00:00:00.001 Started by upstream project "autotest-per-patch" build number 132774 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.078 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.079 The recommended git tool is: git 00:00:00.079 using credential 00000000-0000-0000-0000-000000000002 00:00:00.081 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.121 Fetching changes from the remote Git repository 00:00:00.124 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.150 Using shallow fetch with depth 1 00:00:00.150 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.150 > git --version # timeout=10 00:00:00.195 > git --version # 'git version 2.39.2' 00:00:00.195 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.218 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.218 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.141 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.154 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.168 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.168 > git config core.sparsecheckout # timeout=10 00:00:05.180 > git read-tree -mu HEAD # timeout=10 00:00:05.197 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.220 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.220 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.354 [Pipeline] Start of Pipeline 00:00:05.373 [Pipeline] library 00:00:05.375 Loading library shm_lib@master 00:00:05.375 Library shm_lib@master is cached. Copying from home. 00:00:05.395 [Pipeline] node 00:01:23.148 Running on WFP20 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:23.150 [Pipeline] { 00:01:23.161 [Pipeline] catchError 00:01:23.163 [Pipeline] { 00:01:23.176 [Pipeline] wrap 00:01:23.185 [Pipeline] { 00:01:23.193 [Pipeline] stage 00:01:23.195 [Pipeline] { (Prologue) 00:01:23.390 [Pipeline] sh 00:01:23.838 + logger -p user.info -t JENKINS-CI 00:01:23.896 [Pipeline] echo 00:01:23.898 Node: WFP20 00:01:23.908 [Pipeline] sh 00:01:24.225 [Pipeline] setCustomBuildProperty 00:01:24.239 [Pipeline] echo 00:01:24.241 Cleanup processes 00:01:24.247 [Pipeline] sh 00:01:24.536 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.536 190151 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.551 [Pipeline] sh 00:01:24.839 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.839 ++ grep -v 'sudo pgrep' 00:01:24.839 ++ awk '{print $1}' 00:01:24.839 + sudo kill -9 00:01:24.839 + true 00:01:24.855 [Pipeline] cleanWs 00:01:24.866 [WS-CLEANUP] Deleting project workspace... 00:01:24.866 [WS-CLEANUP] Deferred wipeout is used... 
00:01:24.874 [WS-CLEANUP] done 00:01:24.879 [Pipeline] setCustomBuildProperty 00:01:24.894 [Pipeline] sh 00:01:25.197 + sudo git config --global --replace-all safe.directory '*' 00:01:25.306 [Pipeline] httpRequest 00:01:25.709 [Pipeline] echo 00:01:25.711 Sorcerer 10.211.164.101 is alive 00:01:25.718 [Pipeline] retry 00:01:25.720 [Pipeline] { 00:01:25.733 [Pipeline] httpRequest 00:01:25.737 HttpMethod: GET 00:01:25.737 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:01:25.738 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:01:25.742 Response Code: HTTP/1.1 200 OK 00:01:25.742 Success: Status code 200 is in the accepted range: 200,404 00:01:25.742 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:01:25.888 [Pipeline] } 00:01:25.909 [Pipeline] // retry 00:01:25.918 [Pipeline] sh 00:01:26.197 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:01:26.210 [Pipeline] httpRequest 00:01:26.610 [Pipeline] echo 00:01:26.612 Sorcerer 10.211.164.101 is alive 00:01:26.620 [Pipeline] retry 00:01:26.622 [Pipeline] { 00:01:26.637 [Pipeline] httpRequest 00:01:26.641 HttpMethod: GET 00:01:26.642 URL: http://10.211.164.101/packages/spdk_cabd61f7fcfe4266fd041fd1c59711acd76f4aff.tar.gz 00:01:26.642 Sending request to url: http://10.211.164.101/packages/spdk_cabd61f7fcfe4266fd041fd1c59711acd76f4aff.tar.gz 00:01:26.645 Response Code: HTTP/1.1 200 OK 00:01:26.646 Success: Status code 200 is in the accepted range: 200,404 00:01:26.646 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_cabd61f7fcfe4266fd041fd1c59711acd76f4aff.tar.gz 00:01:28.920 [Pipeline] } 00:01:28.938 [Pipeline] // retry 00:01:28.947 [Pipeline] sh 00:01:29.231 + tar --no-same-owner -xf spdk_cabd61f7fcfe4266fd041fd1c59711acd76f4aff.tar.gz 00:01:31.782 [Pipeline] sh 00:01:32.083 + git -C spdk log 
--oneline -n5 00:01:32.083 cabd61f7f env: extend the page table to support 4-KiB mapping 00:01:32.083 66902d69a env: explicitly set --legacy-mem flag in no hugepages mode 00:01:32.083 421ce3854 env: add mem_map_fini and vtophys_fini to cleanup mem maps 00:01:32.083 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:01:32.083 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:01:32.094 [Pipeline] } 00:01:32.109 [Pipeline] // stage 00:01:32.119 [Pipeline] stage 00:01:32.122 [Pipeline] { (Prepare) 00:01:32.141 [Pipeline] writeFile 00:01:32.158 [Pipeline] sh 00:01:32.444 + logger -p user.info -t JENKINS-CI 00:01:32.458 [Pipeline] sh 00:01:32.745 + logger -p user.info -t JENKINS-CI 00:01:32.758 [Pipeline] sh 00:01:33.049 + cat autorun-spdk.conf 00:01:33.049 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.049 SPDK_TEST_NVMF=1 00:01:33.049 SPDK_TEST_NVME_CLI=1 00:01:33.049 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:33.049 SPDK_TEST_NVMF_NICS=e810 00:01:33.049 SPDK_TEST_VFIOUSER=1 00:01:33.049 SPDK_RUN_UBSAN=1 00:01:33.049 NET_TYPE=phy 00:01:33.056 RUN_NIGHTLY=0 00:01:33.060 [Pipeline] readFile 00:01:33.087 [Pipeline] withEnv 00:01:33.090 [Pipeline] { 00:01:33.102 [Pipeline] sh 00:01:33.389 + set -ex 00:01:33.389 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:33.389 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:33.389 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.389 ++ SPDK_TEST_NVMF=1 00:01:33.389 ++ SPDK_TEST_NVME_CLI=1 00:01:33.389 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:33.389 ++ SPDK_TEST_NVMF_NICS=e810 00:01:33.389 ++ SPDK_TEST_VFIOUSER=1 00:01:33.389 ++ SPDK_RUN_UBSAN=1 00:01:33.389 ++ NET_TYPE=phy 00:01:33.389 ++ RUN_NIGHTLY=0 00:01:33.389 + case $SPDK_TEST_NVMF_NICS in 00:01:33.389 + DRIVERS=ice 00:01:33.389 + [[ tcp == \r\d\m\a ]] 00:01:33.389 + [[ -n ice ]] 00:01:33.389 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:33.389 rmmod: ERROR: Module 
mlx4_ib is not currently loaded 00:01:39.958 rmmod: ERROR: Module irdma is not currently loaded 00:01:39.958 rmmod: ERROR: Module i40iw is not currently loaded 00:01:39.958 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:39.958 + true 00:01:39.958 + for D in $DRIVERS 00:01:39.958 + sudo modprobe ice 00:01:39.958 + exit 0 00:01:39.970 [Pipeline] } 00:01:39.987 [Pipeline] // withEnv 00:01:39.993 [Pipeline] } 00:01:40.007 [Pipeline] // stage 00:01:40.018 [Pipeline] catchError 00:01:40.020 [Pipeline] { 00:01:40.034 [Pipeline] timeout 00:01:40.034 Timeout set to expire in 1 hr 0 min 00:01:40.036 [Pipeline] { 00:01:40.050 [Pipeline] stage 00:01:40.052 [Pipeline] { (Tests) 00:01:40.067 [Pipeline] sh 00:01:40.357 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:40.357 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:40.357 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:40.357 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:40.357 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:40.357 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:40.357 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:40.357 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:40.357 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:40.357 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:40.357 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:40.357 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:40.357 + source /etc/os-release 00:01:40.357 ++ NAME='Fedora Linux' 00:01:40.357 ++ VERSION='39 (Cloud Edition)' 00:01:40.357 ++ ID=fedora 00:01:40.357 ++ VERSION_ID=39 00:01:40.357 ++ VERSION_CODENAME= 00:01:40.357 ++ PLATFORM_ID=platform:f39 00:01:40.357 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:40.357 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:40.357 ++ LOGO=fedora-logo-icon 00:01:40.357 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:40.357 ++ HOME_URL=https://fedoraproject.org/ 00:01:40.357 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:40.357 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:40.357 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:40.357 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:40.357 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:40.357 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:40.357 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:40.357 ++ SUPPORT_END=2024-11-12 00:01:40.357 ++ VARIANT='Cloud Edition' 00:01:40.357 ++ VARIANT_ID=cloud 00:01:40.357 + uname -a 00:01:40.357 Linux spdk-wfp-20 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:40.357 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:43.655 Hugepages 00:01:43.655 node hugesize free / total 00:01:43.655 node0 1048576kB 0 / 0 00:01:43.655 node0 2048kB 0 / 0 00:01:43.655 node1 1048576kB 0 / 0 00:01:43.655 node1 2048kB 0 / 0 00:01:43.655 00:01:43.655 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:43.655 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:43.655 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 
00:01:43.655 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:43.655 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:43.655 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:43.655 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:43.655 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:43.655 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:43.655 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:43.655 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:43.655 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:43.655 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:43.655 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:43.655 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:43.655 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:43.655 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:43.655 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:43.655 + rm -f /tmp/spdk-ld-path 00:01:43.655 + source autorun-spdk.conf 00:01:43.655 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:43.656 ++ SPDK_TEST_NVMF=1 00:01:43.656 ++ SPDK_TEST_NVME_CLI=1 00:01:43.656 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:43.656 ++ SPDK_TEST_NVMF_NICS=e810 00:01:43.656 ++ SPDK_TEST_VFIOUSER=1 00:01:43.656 ++ SPDK_RUN_UBSAN=1 00:01:43.656 ++ NET_TYPE=phy 00:01:43.656 ++ RUN_NIGHTLY=0 00:01:43.656 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:43.656 + [[ -n '' ]] 00:01:43.656 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:43.656 + for M in /var/spdk/build-*-manifest.txt 00:01:43.656 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:43.656 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:43.656 + for M in /var/spdk/build-*-manifest.txt 00:01:43.656 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:43.656 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:43.656 + for M in /var/spdk/build-*-manifest.txt 00:01:43.656 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:01:43.656 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:43.656 ++ uname 00:01:43.656 + [[ Linux == \L\i\n\u\x ]] 00:01:43.656 + sudo dmesg -T 00:01:43.656 + sudo dmesg --clear 00:01:43.656 + dmesg_pid=191064 00:01:43.656 + [[ Fedora Linux == FreeBSD ]] 00:01:43.656 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:43.656 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:43.656 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:43.656 + sudo dmesg -Tw 00:01:43.656 + [[ -x /usr/src/fio-static/fio ]] 00:01:43.656 + export FIO_BIN=/usr/src/fio-static/fio 00:01:43.656 + FIO_BIN=/usr/src/fio-static/fio 00:01:43.656 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:43.656 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:43.656 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:43.656 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:43.656 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:43.656 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:43.656 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:43.656 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:43.656 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:43.656 04:56:26 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:43.656 04:56:26 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:43.656 04:56:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:43.656 04:56:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:43.656 04:56:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:43.656 04:56:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:01:43.656 04:56:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:43.656 04:56:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:43.656 04:56:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:43.656 04:56:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:43.656 04:56:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:43.656 04:56:26 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:43.656 04:56:26 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:43.656 04:56:26 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:43.656 04:56:26 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:43.656 04:56:26 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:43.916 04:56:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:43.916 04:56:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:43.916 04:56:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:43.917 04:56:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.917 04:56:26 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.917 04:56:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.917 04:56:26 -- paths/export.sh@5 -- $ export PATH 00:01:43.917 04:56:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.917 04:56:26 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:43.917 04:56:26 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:43.917 04:56:26 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733716586.XXXXXX 00:01:43.917 04:56:26 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733716586.HgrlBB 00:01:43.917 04:56:26 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:43.917 04:56:26 -- 
common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:43.917 04:56:26 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:43.917 04:56:26 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:43.917 04:56:26 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:43.917 04:56:26 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:43.917 04:56:26 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:43.917 04:56:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.917 04:56:26 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:43.917 04:56:26 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:43.917 04:56:26 -- pm/common@17 -- $ local monitor 00:01:43.917 04:56:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.917 04:56:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.917 04:56:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.917 04:56:26 -- pm/common@21 -- $ date +%s 00:01:43.917 04:56:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.917 04:56:26 -- pm/common@21 -- $ date +%s 00:01:43.917 04:56:26 -- pm/common@25 -- $ sleep 1 00:01:43.917 04:56:26 -- pm/common@21 -- $ date +%s 00:01:43.917 04:56:26 -- pm/common@21 -- $ date +%s 00:01:43.917 04:56:26 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733716586 00:01:43.917 04:56:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733716586 00:01:43.917 04:56:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733716586 00:01:43.917 04:56:26 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733716586 00:01:43.917 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733716586_collect-cpu-load.pm.log 00:01:43.917 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733716586_collect-vmstat.pm.log 00:01:43.917 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733716586_collect-cpu-temp.pm.log 00:01:43.917 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733716586_collect-bmc-pm.bmc.pm.log 00:01:44.856 04:56:27 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:44.856 04:56:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:44.856 04:56:27 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:44.856 04:56:27 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:44.856 04:56:27 -- spdk/autobuild.sh@16 -- $ date -u 00:01:44.856 Mon Dec 9 03:56:27 AM UTC 2024 00:01:44.856 04:56:27 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:01:44.856 v25.01-pre-279-gcabd61f7f 00:01:44.856 04:56:27 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:44.856 04:56:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:44.856 04:56:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:44.856 04:56:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:44.856 04:56:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:44.857 04:56:27 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.857 ************************************ 00:01:44.857 START TEST ubsan 00:01:44.857 ************************************ 00:01:44.857 04:56:27 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:44.857 using ubsan 00:01:44.857 00:01:44.857 real 0m0.001s 00:01:44.857 user 0m0.000s 00:01:44.857 sys 0m0.000s 00:01:44.857 04:56:27 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:44.857 04:56:27 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:44.857 ************************************ 00:01:44.857 END TEST ubsan 00:01:44.857 ************************************ 00:01:44.857 04:56:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:44.857 04:56:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:44.857 04:56:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:44.857 04:56:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:44.857 04:56:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:44.857 04:56:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:44.857 04:56:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:44.857 04:56:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:44.857 04:56:27 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:45.423 Using default SPDK env in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:45.423 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:45.991 Using 'verbs' RDMA provider 00:02:01.866 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:14.096 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:14.362 Creating mk/config.mk...done. 00:02:14.362 Creating mk/cc.flags.mk...done. 00:02:14.362 Type 'make' to build. 00:02:14.362 04:56:56 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:02:14.362 04:56:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:14.362 04:56:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:14.362 04:56:56 -- common/autotest_common.sh@10 -- $ set +x 00:02:14.362 ************************************ 00:02:14.362 START TEST make 00:02:14.362 ************************************ 00:02:14.362 04:56:56 make -- common/autotest_common.sh@1129 -- $ make -j112 00:02:14.932 make[1]: Nothing to be done for 'all'. 
00:02:16.328 The Meson build system 00:02:16.328 Version: 1.5.0 00:02:16.328 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:16.328 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:16.328 Build type: native build 00:02:16.328 Project name: libvfio-user 00:02:16.328 Project version: 0.0.1 00:02:16.328 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:16.328 C linker for the host machine: cc ld.bfd 2.40-14 00:02:16.328 Host machine cpu family: x86_64 00:02:16.328 Host machine cpu: x86_64 00:02:16.328 Run-time dependency threads found: YES 00:02:16.328 Library dl found: YES 00:02:16.328 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:16.328 Run-time dependency json-c found: YES 0.17 00:02:16.328 Run-time dependency cmocka found: YES 1.1.7 00:02:16.328 Program pytest-3 found: NO 00:02:16.328 Program flake8 found: NO 00:02:16.328 Program misspell-fixer found: NO 00:02:16.328 Program restructuredtext-lint found: NO 00:02:16.328 Program valgrind found: YES (/usr/bin/valgrind) 00:02:16.328 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:16.328 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:16.328 Compiler for C supports arguments -Wwrite-strings: YES 00:02:16.328 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:16.328 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:16.328 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:16.328 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:16.328 Build targets in project: 8 00:02:16.328 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:16.328 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:16.328 00:02:16.328 libvfio-user 0.0.1 00:02:16.328 00:02:16.328 User defined options 00:02:16.328 buildtype : debug 00:02:16.328 default_library: shared 00:02:16.328 libdir : /usr/local/lib 00:02:16.328 00:02:16.328 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:16.587 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:16.587 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:16.587 [2/37] Compiling C object samples/null.p/null.c.o 00:02:16.587 [3/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:16.587 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:16.587 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:16.587 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:16.846 [7/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:16.846 [8/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:16.846 [9/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:16.846 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:16.846 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:16.846 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:16.846 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:16.846 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:16.846 [15/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:16.846 [16/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:16.846 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:16.846 [18/37] Compiling C object 
lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:16.846 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:16.846 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:16.846 [21/37] Compiling C object samples/client.p/client.c.o 00:02:16.846 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:16.846 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:16.846 [24/37] Compiling C object samples/server.p/server.c.o 00:02:16.846 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:16.846 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:16.846 [27/37] Linking target samples/client 00:02:16.846 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:16.846 [29/37] Linking target test/unit_tests 00:02:16.846 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:16.846 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:02:17.105 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:17.105 [33/37] Linking target samples/null 00:02:17.105 [34/37] Linking target samples/server 00:02:17.105 [35/37] Linking target samples/lspci 00:02:17.105 [36/37] Linking target samples/gpio-pci-idio-16 00:02:17.105 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:17.105 INFO: autodetecting backend as ninja 00:02:17.105 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:17.105 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:17.675 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:17.675 ninja: no work to do. 
00:02:22.955 The Meson build system 00:02:22.955 Version: 1.5.0 00:02:22.955 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:22.955 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:22.955 Build type: native build 00:02:22.955 Program cat found: YES (/usr/bin/cat) 00:02:22.955 Project name: DPDK 00:02:22.955 Project version: 24.03.0 00:02:22.955 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:22.955 C linker for the host machine: cc ld.bfd 2.40-14 00:02:22.955 Host machine cpu family: x86_64 00:02:22.955 Host machine cpu: x86_64 00:02:22.955 Message: ## Building in Developer Mode ## 00:02:22.955 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:22.955 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:22.955 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:22.955 Program python3 found: YES (/usr/bin/python3) 00:02:22.955 Program cat found: YES (/usr/bin/cat) 00:02:22.955 Compiler for C supports arguments -march=native: YES 00:02:22.955 Checking for size of "void *" : 8 00:02:22.955 Checking for size of "void *" : 8 (cached) 00:02:22.955 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:22.955 Library m found: YES 00:02:22.955 Library numa found: YES 00:02:22.955 Has header "numaif.h" : YES 00:02:22.955 Library fdt found: NO 00:02:22.955 Library execinfo found: NO 00:02:22.955 Has header "execinfo.h" : YES 00:02:22.955 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:22.955 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:22.955 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:22.955 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:22.955 Run-time dependency openssl found: YES 3.1.1 00:02:22.955 Run-time 
dependency libpcap found: YES 1.10.4 00:02:22.955 Has header "pcap.h" with dependency libpcap: YES 00:02:22.955 Compiler for C supports arguments -Wcast-qual: YES 00:02:22.955 Compiler for C supports arguments -Wdeprecated: YES 00:02:22.955 Compiler for C supports arguments -Wformat: YES 00:02:22.955 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:22.955 Compiler for C supports arguments -Wformat-security: NO 00:02:22.955 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:22.955 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:22.955 Compiler for C supports arguments -Wnested-externs: YES 00:02:22.955 Compiler for C supports arguments -Wold-style-definition: YES 00:02:22.955 Compiler for C supports arguments -Wpointer-arith: YES 00:02:22.955 Compiler for C supports arguments -Wsign-compare: YES 00:02:22.955 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:22.955 Compiler for C supports arguments -Wundef: YES 00:02:22.955 Compiler for C supports arguments -Wwrite-strings: YES 00:02:22.955 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:22.955 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:22.955 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:22.955 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:22.955 Program objdump found: YES (/usr/bin/objdump) 00:02:22.955 Compiler for C supports arguments -mavx512f: YES 00:02:22.955 Checking if "AVX512 checking" compiles: YES 00:02:22.955 Fetching value of define "__SSE4_2__" : 1 00:02:22.955 Fetching value of define "__AES__" : 1 00:02:22.955 Fetching value of define "__AVX__" : 1 00:02:22.955 Fetching value of define "__AVX2__" : 1 00:02:22.955 Fetching value of define "__AVX512BW__" : 1 00:02:22.955 Fetching value of define "__AVX512CD__" : 1 00:02:22.955 Fetching value of define "__AVX512DQ__" : 1 00:02:22.955 Fetching value of define "__AVX512F__" : 1 
00:02:22.955 Fetching value of define "__AVX512VL__" : 1 00:02:22.955 Fetching value of define "__PCLMUL__" : 1 00:02:22.955 Fetching value of define "__RDRND__" : 1 00:02:22.955 Fetching value of define "__RDSEED__" : 1 00:02:22.955 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:22.955 Fetching value of define "__znver1__" : (undefined) 00:02:22.955 Fetching value of define "__znver2__" : (undefined) 00:02:22.955 Fetching value of define "__znver3__" : (undefined) 00:02:22.955 Fetching value of define "__znver4__" : (undefined) 00:02:22.955 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:22.956 Message: lib/log: Defining dependency "log" 00:02:22.956 Message: lib/kvargs: Defining dependency "kvargs" 00:02:22.956 Message: lib/telemetry: Defining dependency "telemetry" 00:02:22.956 Checking for function "getentropy" : NO 00:02:22.956 Message: lib/eal: Defining dependency "eal" 00:02:22.956 Message: lib/ring: Defining dependency "ring" 00:02:22.956 Message: lib/rcu: Defining dependency "rcu" 00:02:22.956 Message: lib/mempool: Defining dependency "mempool" 00:02:22.956 Message: lib/mbuf: Defining dependency "mbuf" 00:02:22.956 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:22.956 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:22.956 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:22.956 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:22.956 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:22.956 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:22.956 Compiler for C supports arguments -mpclmul: YES 00:02:22.956 Compiler for C supports arguments -maes: YES 00:02:22.956 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:22.956 Compiler for C supports arguments -mavx512bw: YES 00:02:22.956 Compiler for C supports arguments -mavx512dq: YES 00:02:22.956 Compiler for C supports arguments -mavx512vl: YES 00:02:22.956 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:02:22.956 Compiler for C supports arguments -mavx2: YES 00:02:22.956 Compiler for C supports arguments -mavx: YES 00:02:22.956 Message: lib/net: Defining dependency "net" 00:02:22.956 Message: lib/meter: Defining dependency "meter" 00:02:22.956 Message: lib/ethdev: Defining dependency "ethdev" 00:02:22.956 Message: lib/pci: Defining dependency "pci" 00:02:22.956 Message: lib/cmdline: Defining dependency "cmdline" 00:02:22.956 Message: lib/hash: Defining dependency "hash" 00:02:22.956 Message: lib/timer: Defining dependency "timer" 00:02:22.956 Message: lib/compressdev: Defining dependency "compressdev" 00:02:22.956 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:22.956 Message: lib/dmadev: Defining dependency "dmadev" 00:02:22.956 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:22.956 Message: lib/power: Defining dependency "power" 00:02:22.956 Message: lib/reorder: Defining dependency "reorder" 00:02:22.956 Message: lib/security: Defining dependency "security" 00:02:22.956 Has header "linux/userfaultfd.h" : YES 00:02:22.956 Has header "linux/vduse.h" : YES 00:02:22.956 Message: lib/vhost: Defining dependency "vhost" 00:02:22.956 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:22.956 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:22.956 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:22.956 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:22.956 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:22.956 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:22.956 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:22.956 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:22.956 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:22.956 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:02:22.956 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:22.956 Configuring doxy-api-html.conf using configuration 00:02:22.956 Configuring doxy-api-man.conf using configuration 00:02:22.956 Program mandb found: YES (/usr/bin/mandb) 00:02:22.956 Program sphinx-build found: NO 00:02:22.956 Configuring rte_build_config.h using configuration 00:02:22.956 Message: 00:02:22.956 ================= 00:02:22.956 Applications Enabled 00:02:22.956 ================= 00:02:22.956 00:02:22.956 apps: 00:02:22.956 00:02:22.956 00:02:22.956 Message: 00:02:22.956 ================= 00:02:22.956 Libraries Enabled 00:02:22.956 ================= 00:02:22.956 00:02:22.956 libs: 00:02:22.956 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:22.956 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:22.956 cryptodev, dmadev, power, reorder, security, vhost, 00:02:22.956 00:02:22.956 Message: 00:02:22.956 =============== 00:02:22.956 Drivers Enabled 00:02:22.956 =============== 00:02:22.956 00:02:22.956 common: 00:02:22.956 00:02:22.956 bus: 00:02:22.956 pci, vdev, 00:02:22.956 mempool: 00:02:22.956 ring, 00:02:22.956 dma: 00:02:22.956 00:02:22.956 net: 00:02:22.956 00:02:22.956 crypto: 00:02:22.956 00:02:22.956 compress: 00:02:22.956 00:02:22.956 vdpa: 00:02:22.956 00:02:22.956 00:02:22.956 Message: 00:02:22.956 ================= 00:02:22.956 Content Skipped 00:02:22.956 ================= 00:02:22.956 00:02:22.956 apps: 00:02:22.956 dumpcap: explicitly disabled via build config 00:02:22.956 graph: explicitly disabled via build config 00:02:22.956 pdump: explicitly disabled via build config 00:02:22.956 proc-info: explicitly disabled via build config 00:02:22.956 test-acl: explicitly disabled via build config 00:02:22.956 test-bbdev: explicitly disabled via build config 00:02:22.956 test-cmdline: explicitly disabled via build config 00:02:22.956 test-compress-perf: explicitly disabled via build config 00:02:22.956 test-crypto-perf: explicitly disabled 
via build config 00:02:22.956 test-dma-perf: explicitly disabled via build config 00:02:22.956 test-eventdev: explicitly disabled via build config 00:02:22.956 test-fib: explicitly disabled via build config 00:02:22.956 test-flow-perf: explicitly disabled via build config 00:02:22.956 test-gpudev: explicitly disabled via build config 00:02:22.956 test-mldev: explicitly disabled via build config 00:02:22.956 test-pipeline: explicitly disabled via build config 00:02:22.956 test-pmd: explicitly disabled via build config 00:02:22.956 test-regex: explicitly disabled via build config 00:02:22.956 test-sad: explicitly disabled via build config 00:02:22.956 test-security-perf: explicitly disabled via build config 00:02:22.956 00:02:22.956 libs: 00:02:22.956 argparse: explicitly disabled via build config 00:02:22.956 metrics: explicitly disabled via build config 00:02:22.956 acl: explicitly disabled via build config 00:02:22.956 bbdev: explicitly disabled via build config 00:02:22.956 bitratestats: explicitly disabled via build config 00:02:22.956 bpf: explicitly disabled via build config 00:02:22.956 cfgfile: explicitly disabled via build config 00:02:22.956 distributor: explicitly disabled via build config 00:02:22.956 efd: explicitly disabled via build config 00:02:22.956 eventdev: explicitly disabled via build config 00:02:22.956 dispatcher: explicitly disabled via build config 00:02:22.956 gpudev: explicitly disabled via build config 00:02:22.956 gro: explicitly disabled via build config 00:02:22.956 gso: explicitly disabled via build config 00:02:22.956 ip_frag: explicitly disabled via build config 00:02:22.956 jobstats: explicitly disabled via build config 00:02:22.956 latencystats: explicitly disabled via build config 00:02:22.956 lpm: explicitly disabled via build config 00:02:22.956 member: explicitly disabled via build config 00:02:22.956 pcapng: explicitly disabled via build config 00:02:22.956 rawdev: explicitly disabled via build config 00:02:22.956 regexdev: 
explicitly disabled via build config 00:02:22.956 mldev: explicitly disabled via build config 00:02:22.956 rib: explicitly disabled via build config 00:02:22.956 sched: explicitly disabled via build config 00:02:22.956 stack: explicitly disabled via build config 00:02:22.956 ipsec: explicitly disabled via build config 00:02:22.956 pdcp: explicitly disabled via build config 00:02:22.956 fib: explicitly disabled via build config 00:02:22.957 port: explicitly disabled via build config 00:02:22.957 pdump: explicitly disabled via build config 00:02:22.957 table: explicitly disabled via build config 00:02:22.957 pipeline: explicitly disabled via build config 00:02:22.957 graph: explicitly disabled via build config 00:02:22.957 node: explicitly disabled via build config 00:02:22.957 00:02:22.957 drivers: 00:02:22.957 common/cpt: not in enabled drivers build config 00:02:22.957 common/dpaax: not in enabled drivers build config 00:02:22.957 common/iavf: not in enabled drivers build config 00:02:22.957 common/idpf: not in enabled drivers build config 00:02:22.957 common/ionic: not in enabled drivers build config 00:02:22.957 common/mvep: not in enabled drivers build config 00:02:22.957 common/octeontx: not in enabled drivers build config 00:02:22.957 bus/auxiliary: not in enabled drivers build config 00:02:22.957 bus/cdx: not in enabled drivers build config 00:02:22.957 bus/dpaa: not in enabled drivers build config 00:02:22.957 bus/fslmc: not in enabled drivers build config 00:02:22.957 bus/ifpga: not in enabled drivers build config 00:02:22.957 bus/platform: not in enabled drivers build config 00:02:22.957 bus/uacce: not in enabled drivers build config 00:02:22.957 bus/vmbus: not in enabled drivers build config 00:02:22.957 common/cnxk: not in enabled drivers build config 00:02:22.957 common/mlx5: not in enabled drivers build config 00:02:22.957 common/nfp: not in enabled drivers build config 00:02:22.957 common/nitrox: not in enabled drivers build config 00:02:22.957 
common/qat: not in enabled drivers build config 00:02:22.957 common/sfc_efx: not in enabled drivers build config 00:02:22.957 mempool/bucket: not in enabled drivers build config 00:02:22.957 mempool/cnxk: not in enabled drivers build config 00:02:22.957 mempool/dpaa: not in enabled drivers build config 00:02:22.957 mempool/dpaa2: not in enabled drivers build config 00:02:22.957 mempool/octeontx: not in enabled drivers build config 00:02:22.957 mempool/stack: not in enabled drivers build config 00:02:22.957 dma/cnxk: not in enabled drivers build config 00:02:22.957 dma/dpaa: not in enabled drivers build config 00:02:22.957 dma/dpaa2: not in enabled drivers build config 00:02:22.957 dma/hisilicon: not in enabled drivers build config 00:02:22.957 dma/idxd: not in enabled drivers build config 00:02:22.957 dma/ioat: not in enabled drivers build config 00:02:22.957 dma/skeleton: not in enabled drivers build config 00:02:22.957 net/af_packet: not in enabled drivers build config 00:02:22.957 net/af_xdp: not in enabled drivers build config 00:02:22.957 net/ark: not in enabled drivers build config 00:02:22.957 net/atlantic: not in enabled drivers build config 00:02:22.957 net/avp: not in enabled drivers build config 00:02:22.957 net/axgbe: not in enabled drivers build config 00:02:22.957 net/bnx2x: not in enabled drivers build config 00:02:22.957 net/bnxt: not in enabled drivers build config 00:02:22.957 net/bonding: not in enabled drivers build config 00:02:22.957 net/cnxk: not in enabled drivers build config 00:02:22.957 net/cpfl: not in enabled drivers build config 00:02:22.957 net/cxgbe: not in enabled drivers build config 00:02:22.957 net/dpaa: not in enabled drivers build config 00:02:22.957 net/dpaa2: not in enabled drivers build config 00:02:22.957 net/e1000: not in enabled drivers build config 00:02:22.957 net/ena: not in enabled drivers build config 00:02:22.957 net/enetc: not in enabled drivers build config 00:02:22.957 net/enetfec: not in enabled drivers build 
config 00:02:22.957 net/enic: not in enabled drivers build config 00:02:22.957 net/failsafe: not in enabled drivers build config 00:02:22.957 net/fm10k: not in enabled drivers build config 00:02:22.957 net/gve: not in enabled drivers build config 00:02:22.957 net/hinic: not in enabled drivers build config 00:02:22.957 net/hns3: not in enabled drivers build config 00:02:22.957 net/i40e: not in enabled drivers build config 00:02:22.957 net/iavf: not in enabled drivers build config 00:02:22.957 net/ice: not in enabled drivers build config 00:02:22.957 net/idpf: not in enabled drivers build config 00:02:22.957 net/igc: not in enabled drivers build config 00:02:22.957 net/ionic: not in enabled drivers build config 00:02:22.957 net/ipn3ke: not in enabled drivers build config 00:02:22.957 net/ixgbe: not in enabled drivers build config 00:02:22.957 net/mana: not in enabled drivers build config 00:02:22.957 net/memif: not in enabled drivers build config 00:02:22.957 net/mlx4: not in enabled drivers build config 00:02:22.957 net/mlx5: not in enabled drivers build config 00:02:22.957 net/mvneta: not in enabled drivers build config 00:02:22.957 net/mvpp2: not in enabled drivers build config 00:02:22.957 net/netvsc: not in enabled drivers build config 00:02:22.957 net/nfb: not in enabled drivers build config 00:02:22.957 net/nfp: not in enabled drivers build config 00:02:22.957 net/ngbe: not in enabled drivers build config 00:02:22.957 net/null: not in enabled drivers build config 00:02:22.957 net/octeontx: not in enabled drivers build config 00:02:22.957 net/octeon_ep: not in enabled drivers build config 00:02:22.957 net/pcap: not in enabled drivers build config 00:02:22.957 net/pfe: not in enabled drivers build config 00:02:22.957 net/qede: not in enabled drivers build config 00:02:22.957 net/ring: not in enabled drivers build config 00:02:22.957 net/sfc: not in enabled drivers build config 00:02:22.957 net/softnic: not in enabled drivers build config 00:02:22.957 net/tap: 
not in enabled drivers build config 00:02:22.957 net/thunderx: not in enabled drivers build config 00:02:22.957 net/txgbe: not in enabled drivers build config 00:02:22.957 net/vdev_netvsc: not in enabled drivers build config 00:02:22.957 net/vhost: not in enabled drivers build config 00:02:22.957 net/virtio: not in enabled drivers build config 00:02:22.957 net/vmxnet3: not in enabled drivers build config 00:02:22.957 raw/*: missing internal dependency, "rawdev" 00:02:22.957 crypto/armv8: not in enabled drivers build config 00:02:22.957 crypto/bcmfs: not in enabled drivers build config 00:02:22.957 crypto/caam_jr: not in enabled drivers build config 00:02:22.957 crypto/ccp: not in enabled drivers build config 00:02:22.957 crypto/cnxk: not in enabled drivers build config 00:02:22.957 crypto/dpaa_sec: not in enabled drivers build config 00:02:22.957 crypto/dpaa2_sec: not in enabled drivers build config 00:02:22.957 crypto/ipsec_mb: not in enabled drivers build config 00:02:22.957 crypto/mlx5: not in enabled drivers build config 00:02:22.957 crypto/mvsam: not in enabled drivers build config 00:02:22.957 crypto/nitrox: not in enabled drivers build config 00:02:22.957 crypto/null: not in enabled drivers build config 00:02:22.957 crypto/octeontx: not in enabled drivers build config 00:02:22.957 crypto/openssl: not in enabled drivers build config 00:02:22.957 crypto/scheduler: not in enabled drivers build config 00:02:22.957 crypto/uadk: not in enabled drivers build config 00:02:22.957 crypto/virtio: not in enabled drivers build config 00:02:22.957 compress/isal: not in enabled drivers build config 00:02:22.957 compress/mlx5: not in enabled drivers build config 00:02:22.957 compress/nitrox: not in enabled drivers build config 00:02:22.957 compress/octeontx: not in enabled drivers build config 00:02:22.957 compress/zlib: not in enabled drivers build config 00:02:22.957 regex/*: missing internal dependency, "regexdev" 00:02:22.957 ml/*: missing internal dependency, "mldev" 
00:02:22.957 vdpa/ifc: not in enabled drivers build config 00:02:22.957 vdpa/mlx5: not in enabled drivers build config 00:02:22.957 vdpa/nfp: not in enabled drivers build config 00:02:22.957 vdpa/sfc: not in enabled drivers build config 00:02:22.957 event/*: missing internal dependency, "eventdev" 00:02:22.957 baseband/*: missing internal dependency, "bbdev" 00:02:22.957 gpu/*: missing internal dependency, "gpudev" 00:02:22.957 00:02:22.957 00:02:23.217 Build targets in project: 85 00:02:23.217 00:02:23.217 DPDK 24.03.0 00:02:23.217 00:02:23.217 User defined options 00:02:23.217 buildtype : debug 00:02:23.217 default_library : shared 00:02:23.217 libdir : lib 00:02:23.242 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:23.242 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:23.242 c_link_args : 00:02:23.242 cpu_instruction_set: native 00:02:23.242 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:23.242 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:02:23.242 enable_docs : false 00:02:23.242 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:23.242 enable_kmods : false 00:02:23.242 max_lcores : 128 00:02:23.242 tests : false 00:02:23.242 00:02:23.242 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:23.508 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:23.770 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:23.770 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:23.770 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:23.770 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:23.770 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:23.770 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:23.770 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:23.770 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:23.771 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:23.771 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:23.771 [11/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:23.771 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:23.771 [13/268] Linking static target lib/librte_kvargs.a 00:02:23.771 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:23.771 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:23.771 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:23.771 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:23.771 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:23.771 [19/268] Linking static target lib/librte_log.a 00:02:24.031 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:24.031 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:24.031 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:24.031 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:24.031 [24/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:24.031 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:24.031 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:24.031 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:24.031 [28/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:24.031 [29/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:24.031 [30/268] Linking static target lib/librte_pci.a 00:02:24.031 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:24.031 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:24.031 [33/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:24.290 [34/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:24.290 [35/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:24.290 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:24.290 [37/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:24.290 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:24.290 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:24.290 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:24.290 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:24.290 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:24.290 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:24.290 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:24.290 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:24.290 [46/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 
00:02:24.290 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:24.290 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:24.290 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:24.290 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:24.290 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:24.290 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:24.290 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:24.290 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:24.290 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:24.290 [56/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:24.290 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:24.290 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:24.290 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:24.290 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:24.290 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:24.290 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:24.290 [63/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:24.290 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:24.290 [65/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:24.290 [66/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:24.290 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:24.290 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:24.290 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:24.290 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:24.290 [71/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:24.290 [72/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:24.291 [73/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:24.291 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:24.291 [75/268] Linking static target lib/librte_meter.a 00:02:24.291 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:24.291 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:24.291 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:24.291 [79/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:24.291 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:24.291 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:24.291 [82/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:24.291 [83/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:24.291 [84/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:24.291 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:24.291 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:24.291 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:24.291 [88/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:24.291 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:24.291 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:24.291 [91/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:24.291 
[92/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:24.291 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:24.291 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:24.291 [95/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:24.291 [96/268] Linking static target lib/librte_telemetry.a 00:02:24.291 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:24.291 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:24.291 [99/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:24.291 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:24.291 [101/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:24.291 [102/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:24.291 [103/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.291 [104/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.291 [105/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:24.291 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:24.291 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:24.550 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:24.550 [109/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:24.550 [110/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:24.550 [111/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:24.550 [112/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:24.550 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:24.550 [114/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:24.550 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:24.550 [116/268] Linking static target lib/librte_ring.a 00:02:24.550 [117/268] Linking static target lib/librte_mempool.a 00:02:24.550 [118/268] Linking static target lib/librte_rcu.a 00:02:24.550 [119/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:24.550 [120/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:24.550 [121/268] Linking static target lib/librte_cmdline.a 00:02:24.550 [122/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:24.550 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:24.550 [124/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:24.550 [125/268] Linking static target lib/librte_timer.a 00:02:24.550 [126/268] Linking static target lib/librte_net.a 00:02:24.550 [127/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:24.550 [128/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:24.550 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:24.550 [130/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:24.550 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:24.550 [132/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:24.550 [133/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:24.550 [134/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:24.550 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:24.550 [136/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:24.550 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:24.550 [138/268] Linking static target 
lib/librte_eal.a 00:02:24.550 [139/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:24.550 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:24.550 [141/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:24.550 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:24.550 [143/268] Linking static target lib/librte_dmadev.a 00:02:24.551 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:24.551 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:24.551 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:24.551 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:24.551 [148/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:24.551 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:24.551 [150/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:24.551 [151/268] Linking static target lib/librte_compressdev.a 00:02:24.551 [152/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:24.551 [153/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:24.551 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:24.551 [155/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:24.551 [156/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:24.551 [157/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.551 [158/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:24.809 [159/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:24.809 [160/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:24.809 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:24.809 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:24.809 [163/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:24.809 [164/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.809 [165/268] Linking static target lib/librte_mbuf.a 00:02:24.809 [166/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:24.809 [167/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:24.809 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:24.809 [169/268] Linking static target lib/librte_reorder.a 00:02:24.809 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:24.809 [171/268] Linking target lib/librte_log.so.24.1 00:02:24.809 [172/268] Linking static target lib/librte_power.a 00:02:24.809 [173/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.809 [174/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.809 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:24.809 [176/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.809 [177/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:24.809 [178/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:24.809 [179/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:24.809 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:24.809 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:24.809 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:24.809 
[183/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:24.809 [184/268] Linking static target lib/librte_security.a 00:02:24.809 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:24.809 [186/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:24.809 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:24.809 [188/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:24.809 [189/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:24.809 [190/268] Linking static target lib/librte_hash.a 00:02:24.809 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:24.809 [192/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:24.809 [193/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.809 [194/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:24.809 [195/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:25.067 [196/268] Linking static target drivers/librte_bus_vdev.a 00:02:25.067 [197/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:25.067 [198/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.067 [199/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:25.067 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:25.067 [201/268] Linking target lib/librte_kvargs.so.24.1 00:02:25.067 [202/268] Linking static target lib/librte_cryptodev.a 00:02:25.067 [203/268] Linking target lib/librte_telemetry.so.24.1 00:02:25.067 [204/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:25.067 [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:25.067 [206/268] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.068 [207/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.068 [208/268] Linking static target drivers/librte_mempool_ring.a 00:02:25.068 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:25.068 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:25.068 [211/268] Linking static target drivers/librte_bus_pci.a 00:02:25.068 [212/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:25.068 [213/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:25.326 [214/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.326 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.326 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:25.326 [217/268] Linking static target lib/librte_ethdev.a 00:02:25.326 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.326 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.326 [220/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.585 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.585 [222/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:25.586 [223/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.844 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.844 [225/268] Generating lib/power.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:26.103 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.103 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.671 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:26.671 [229/268] Linking static target lib/librte_vhost.a 00:02:27.247 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.148 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.707 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.609 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.609 [234/268] Linking target lib/librte_eal.so.24.1 00:02:37.609 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:37.609 [236/268] Linking target lib/librte_pci.so.24.1 00:02:37.609 [237/268] Linking target lib/librte_ring.so.24.1 00:02:37.609 [238/268] Linking target lib/librte_meter.so.24.1 00:02:37.609 [239/268] Linking target lib/librte_dmadev.so.24.1 00:02:37.609 [240/268] Linking target lib/librte_timer.so.24.1 00:02:37.609 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:37.609 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:37.609 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:37.609 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:37.609 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:37.609 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:37.868 [247/268] Linking target drivers/librte_bus_pci.so.24.1 
00:02:37.868 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:37.868 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:37.868 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:37.868 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:37.868 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:37.868 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:38.127 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:38.127 [255/268] Linking target lib/librte_net.so.24.1 00:02:38.127 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:38.127 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:38.127 [258/268] Linking target lib/librte_compressdev.so.24.1 00:02:38.387 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:38.387 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:38.387 [261/268] Linking target lib/librte_hash.so.24.1 00:02:38.387 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:38.387 [263/268] Linking target lib/librte_security.so.24.1 00:02:38.387 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:38.387 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:38.387 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:38.646 [267/268] Linking target lib/librte_power.so.24.1 00:02:38.646 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:38.646 INFO: autodetecting backend as ninja 00:02:38.646 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:45.221 CC lib/ut/ut.o 00:02:45.221 CC lib/log/log.o 00:02:45.221 CC lib/ut_mock/mock.o 00:02:45.221 CC lib/log/log_flags.o 00:02:45.221 CC lib/log/log_deprecated.o 
00:02:45.221 LIB libspdk_log.a 00:02:45.221 LIB libspdk_ut.a 00:02:45.221 LIB libspdk_ut_mock.a 00:02:45.221 SO libspdk_log.so.7.1 00:02:45.221 SO libspdk_ut_mock.so.6.0 00:02:45.221 SO libspdk_ut.so.2.0 00:02:45.221 SYMLINK libspdk_ut_mock.so 00:02:45.221 SYMLINK libspdk_ut.so 00:02:45.221 SYMLINK libspdk_log.so 00:02:45.221 CC lib/dma/dma.o 00:02:45.221 CC lib/ioat/ioat.o 00:02:45.221 CXX lib/trace_parser/trace.o 00:02:45.221 CC lib/util/base64.o 00:02:45.221 CC lib/util/bit_array.o 00:02:45.221 CC lib/util/cpuset.o 00:02:45.221 CC lib/util/crc16.o 00:02:45.221 CC lib/util/crc32.o 00:02:45.221 CC lib/util/crc32c.o 00:02:45.221 CC lib/util/crc32_ieee.o 00:02:45.221 CC lib/util/crc64.o 00:02:45.221 CC lib/util/dif.o 00:02:45.221 CC lib/util/fd.o 00:02:45.221 CC lib/util/hexlify.o 00:02:45.221 CC lib/util/fd_group.o 00:02:45.221 CC lib/util/file.o 00:02:45.221 CC lib/util/iov.o 00:02:45.221 CC lib/util/math.o 00:02:45.221 CC lib/util/strerror_tls.o 00:02:45.221 CC lib/util/net.o 00:02:45.221 CC lib/util/pipe.o 00:02:45.221 CC lib/util/uuid.o 00:02:45.221 CC lib/util/string.o 00:02:45.221 CC lib/util/xor.o 00:02:45.221 CC lib/util/zipf.o 00:02:45.221 CC lib/util/md5.o 00:02:45.480 CC lib/vfio_user/host/vfio_user_pci.o 00:02:45.480 CC lib/vfio_user/host/vfio_user.o 00:02:45.480 LIB libspdk_dma.a 00:02:45.480 SO libspdk_dma.so.5.0 00:02:45.480 LIB libspdk_ioat.a 00:02:45.480 SYMLINK libspdk_dma.so 00:02:45.480 SO libspdk_ioat.so.7.0 00:02:45.480 SYMLINK libspdk_ioat.so 00:02:45.480 LIB libspdk_vfio_user.a 00:02:45.739 LIB libspdk_util.a 00:02:45.739 SO libspdk_vfio_user.so.5.0 00:02:45.739 SYMLINK libspdk_vfio_user.so 00:02:45.739 SO libspdk_util.so.10.1 00:02:45.739 SYMLINK libspdk_util.so 00:02:46.309 CC lib/rdma_utils/rdma_utils.o 00:02:46.309 CC lib/conf/conf.o 00:02:46.309 CC lib/idxd/idxd.o 00:02:46.309 CC lib/json/json_parse.o 00:02:46.309 CC lib/vmd/vmd.o 00:02:46.309 CC lib/json/json_util.o 00:02:46.309 CC lib/vmd/led.o 00:02:46.309 CC lib/idxd/idxd_user.o 
00:02:46.309 CC lib/env_dpdk/env.o 00:02:46.309 CC lib/idxd/idxd_kernel.o 00:02:46.309 CC lib/json/json_write.o 00:02:46.309 CC lib/env_dpdk/memory.o 00:02:46.309 CC lib/env_dpdk/pci.o 00:02:46.309 CC lib/env_dpdk/init.o 00:02:46.309 CC lib/env_dpdk/threads.o 00:02:46.309 CC lib/env_dpdk/pci_ioat.o 00:02:46.309 CC lib/env_dpdk/pci_virtio.o 00:02:46.309 CC lib/env_dpdk/pci_vmd.o 00:02:46.309 CC lib/env_dpdk/pci_idxd.o 00:02:46.309 CC lib/env_dpdk/pci_event.o 00:02:46.309 CC lib/env_dpdk/sigbus_handler.o 00:02:46.309 CC lib/env_dpdk/pci_dpdk.o 00:02:46.309 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:46.309 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:46.568 LIB libspdk_conf.a 00:02:46.568 LIB libspdk_rdma_utils.a 00:02:46.568 SO libspdk_conf.so.6.0 00:02:46.568 SO libspdk_rdma_utils.so.1.0 00:02:46.568 LIB libspdk_json.a 00:02:46.568 SO libspdk_json.so.6.0 00:02:46.568 SYMLINK libspdk_rdma_utils.so 00:02:46.568 SYMLINK libspdk_conf.so 00:02:46.568 LIB libspdk_trace_parser.a 00:02:46.568 SYMLINK libspdk_json.so 00:02:46.568 SO libspdk_trace_parser.so.6.0 00:02:46.827 LIB libspdk_idxd.a 00:02:46.827 SYMLINK libspdk_trace_parser.so 00:02:46.827 LIB libspdk_vmd.a 00:02:46.827 SO libspdk_idxd.so.12.1 00:02:46.827 SO libspdk_vmd.so.6.0 00:02:46.827 SYMLINK libspdk_idxd.so 00:02:46.827 SYMLINK libspdk_vmd.so 00:02:46.827 CC lib/rdma_provider/common.o 00:02:46.827 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:47.086 CC lib/jsonrpc/jsonrpc_server.o 00:02:47.086 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:47.086 CC lib/jsonrpc/jsonrpc_client.o 00:02:47.086 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:47.086 LIB libspdk_rdma_provider.a 00:02:47.086 SO libspdk_rdma_provider.so.7.0 00:02:47.086 LIB libspdk_jsonrpc.a 00:02:47.086 SYMLINK libspdk_rdma_provider.so 00:02:47.346 SO libspdk_jsonrpc.so.6.0 00:02:47.346 LIB libspdk_env_dpdk.a 00:02:47.346 SYMLINK libspdk_jsonrpc.so 00:02:47.346 SO libspdk_env_dpdk.so.15.1 00:02:47.346 SYMLINK libspdk_env_dpdk.so 00:02:47.606 CC lib/rpc/rpc.o 
00:02:47.866 LIB libspdk_rpc.a 00:02:47.866 SO libspdk_rpc.so.6.0 00:02:47.866 SYMLINK libspdk_rpc.so 00:02:48.438 CC lib/notify/notify.o 00:02:48.438 CC lib/notify/notify_rpc.o 00:02:48.438 CC lib/trace/trace.o 00:02:48.438 CC lib/trace/trace_flags.o 00:02:48.438 CC lib/trace/trace_rpc.o 00:02:48.438 CC lib/keyring/keyring.o 00:02:48.438 CC lib/keyring/keyring_rpc.o 00:02:48.438 LIB libspdk_notify.a 00:02:48.438 SO libspdk_notify.so.6.0 00:02:48.438 LIB libspdk_keyring.a 00:02:48.438 LIB libspdk_trace.a 00:02:48.702 SO libspdk_trace.so.11.0 00:02:48.702 SO libspdk_keyring.so.2.0 00:02:48.702 SYMLINK libspdk_notify.so 00:02:48.702 SYMLINK libspdk_trace.so 00:02:48.702 SYMLINK libspdk_keyring.so 00:02:48.961 CC lib/thread/thread.o 00:02:48.961 CC lib/thread/iobuf.o 00:02:48.961 CC lib/sock/sock.o 00:02:48.961 CC lib/sock/sock_rpc.o 00:02:49.220 LIB libspdk_sock.a 00:02:49.480 SO libspdk_sock.so.10.0 00:02:49.480 SYMLINK libspdk_sock.so 00:02:49.739 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:49.739 CC lib/nvme/nvme_ctrlr.o 00:02:49.739 CC lib/nvme/nvme_fabric.o 00:02:49.739 CC lib/nvme/nvme_ns_cmd.o 00:02:49.739 CC lib/nvme/nvme_ns.o 00:02:49.739 CC lib/nvme/nvme_pcie_common.o 00:02:49.739 CC lib/nvme/nvme_pcie.o 00:02:49.739 CC lib/nvme/nvme_qpair.o 00:02:49.739 CC lib/nvme/nvme.o 00:02:49.739 CC lib/nvme/nvme_quirks.o 00:02:49.739 CC lib/nvme/nvme_transport.o 00:02:49.739 CC lib/nvme/nvme_discovery.o 00:02:49.739 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:49.739 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:49.739 CC lib/nvme/nvme_tcp.o 00:02:49.739 CC lib/nvme/nvme_opal.o 00:02:49.739 CC lib/nvme/nvme_io_msg.o 00:02:49.739 CC lib/nvme/nvme_poll_group.o 00:02:49.739 CC lib/nvme/nvme_zns.o 00:02:49.739 CC lib/nvme/nvme_stubs.o 00:02:49.739 CC lib/nvme/nvme_auth.o 00:02:49.739 CC lib/nvme/nvme_cuse.o 00:02:49.739 CC lib/nvme/nvme_vfio_user.o 00:02:49.739 CC lib/nvme/nvme_rdma.o 00:02:49.997 LIB libspdk_thread.a 00:02:49.997 SO libspdk_thread.so.11.0 00:02:50.256 SYMLINK 
libspdk_thread.so 00:02:50.515 CC lib/init/json_config.o 00:02:50.515 CC lib/init/subsystem.o 00:02:50.515 CC lib/init/subsystem_rpc.o 00:02:50.515 CC lib/init/rpc.o 00:02:50.515 CC lib/blob/blobstore.o 00:02:50.515 CC lib/blob/request.o 00:02:50.515 CC lib/blob/blob_bs_dev.o 00:02:50.515 CC lib/blob/zeroes.o 00:02:50.515 CC lib/fsdev/fsdev.o 00:02:50.515 CC lib/fsdev/fsdev_io.o 00:02:50.515 CC lib/fsdev/fsdev_rpc.o 00:02:50.515 CC lib/virtio/virtio.o 00:02:50.515 CC lib/virtio/virtio_vfio_user.o 00:02:50.515 CC lib/virtio/virtio_vhost_user.o 00:02:50.515 CC lib/virtio/virtio_pci.o 00:02:50.515 CC lib/accel/accel.o 00:02:50.515 CC lib/accel/accel_rpc.o 00:02:50.515 CC lib/accel/accel_sw.o 00:02:50.515 CC lib/vfu_tgt/tgt_rpc.o 00:02:50.515 CC lib/vfu_tgt/tgt_endpoint.o 00:02:50.776 LIB libspdk_init.a 00:02:50.776 SO libspdk_init.so.6.0 00:02:50.776 LIB libspdk_virtio.a 00:02:50.776 LIB libspdk_vfu_tgt.a 00:02:50.776 SYMLINK libspdk_init.so 00:02:50.776 SO libspdk_vfu_tgt.so.3.0 00:02:50.776 SO libspdk_virtio.so.7.0 00:02:51.034 SYMLINK libspdk_vfu_tgt.so 00:02:51.034 SYMLINK libspdk_virtio.so 00:02:51.034 LIB libspdk_fsdev.a 00:02:51.034 SO libspdk_fsdev.so.2.0 00:02:51.034 SYMLINK libspdk_fsdev.so 00:02:51.311 CC lib/event/app.o 00:02:51.311 CC lib/event/reactor.o 00:02:51.311 CC lib/event/log_rpc.o 00:02:51.311 CC lib/event/app_rpc.o 00:02:51.311 CC lib/event/scheduler_static.o 00:02:51.311 LIB libspdk_accel.a 00:02:51.311 SO libspdk_accel.so.16.0 00:02:51.311 LIB libspdk_nvme.a 00:02:51.311 SYMLINK libspdk_accel.so 00:02:51.571 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:51.571 LIB libspdk_event.a 00:02:51.571 SO libspdk_event.so.14.0 00:02:51.571 SO libspdk_nvme.so.15.0 00:02:51.571 SYMLINK libspdk_event.so 00:02:51.830 SYMLINK libspdk_nvme.so 00:02:51.830 CC lib/bdev/bdev.o 00:02:51.830 CC lib/bdev/bdev_rpc.o 00:02:51.830 CC lib/bdev/bdev_zone.o 00:02:51.830 CC lib/bdev/part.o 00:02:51.830 CC lib/bdev/scsi_nvme.o 00:02:52.090 LIB libspdk_fuse_dispatcher.a 
00:02:52.090 SO libspdk_fuse_dispatcher.so.1.0 00:02:52.090 SYMLINK libspdk_fuse_dispatcher.so 00:02:52.659 LIB libspdk_blob.a 00:02:52.659 SO libspdk_blob.so.12.0 00:02:52.659 SYMLINK libspdk_blob.so 00:02:53.228 CC lib/lvol/lvol.o 00:02:53.228 CC lib/blobfs/blobfs.o 00:02:53.228 CC lib/blobfs/tree.o 00:02:53.796 LIB libspdk_bdev.a 00:02:53.796 SO libspdk_bdev.so.17.0 00:02:53.796 LIB libspdk_blobfs.a 00:02:53.796 SO libspdk_blobfs.so.11.0 00:02:53.796 LIB libspdk_lvol.a 00:02:53.796 SYMLINK libspdk_bdev.so 00:02:53.796 SO libspdk_lvol.so.11.0 00:02:53.796 SYMLINK libspdk_blobfs.so 00:02:53.796 SYMLINK libspdk_lvol.so 00:02:54.056 CC lib/ftl/ftl_core.o 00:02:54.056 CC lib/ftl/ftl_init.o 00:02:54.056 CC lib/nvmf/ctrlr.o 00:02:54.056 CC lib/scsi/dev.o 00:02:54.056 CC lib/ftl/ftl_layout.o 00:02:54.056 CC lib/ftl/ftl_debug.o 00:02:54.056 CC lib/scsi/port.o 00:02:54.056 CC lib/scsi/lun.o 00:02:54.056 CC lib/nbd/nbd.o 00:02:54.056 CC lib/nvmf/ctrlr_discovery.o 00:02:54.056 CC lib/ftl/ftl_io.o 00:02:54.056 CC lib/nbd/nbd_rpc.o 00:02:54.056 CC lib/ublk/ublk.o 00:02:54.056 CC lib/nvmf/ctrlr_bdev.o 00:02:54.056 CC lib/ftl/ftl_sb.o 00:02:54.056 CC lib/ublk/ublk_rpc.o 00:02:54.056 CC lib/scsi/scsi.o 00:02:54.056 CC lib/ftl/ftl_l2p.o 00:02:54.056 CC lib/nvmf/subsystem.o 00:02:54.056 CC lib/nvmf/nvmf.o 00:02:54.056 CC lib/scsi/scsi_bdev.o 00:02:54.056 CC lib/ftl/ftl_l2p_flat.o 00:02:54.056 CC lib/ftl/ftl_nv_cache.o 00:02:54.056 CC lib/scsi/scsi_pr.o 00:02:54.056 CC lib/nvmf/nvmf_rpc.o 00:02:54.056 CC lib/scsi/scsi_rpc.o 00:02:54.056 CC lib/ftl/ftl_band.o 00:02:54.056 CC lib/ftl/ftl_band_ops.o 00:02:54.056 CC lib/nvmf/transport.o 00:02:54.056 CC lib/scsi/task.o 00:02:54.056 CC lib/ftl/ftl_writer.o 00:02:54.056 CC lib/nvmf/tcp.o 00:02:54.056 CC lib/nvmf/stubs.o 00:02:54.056 CC lib/ftl/ftl_rq.o 00:02:54.056 CC lib/ftl/ftl_reloc.o 00:02:54.056 CC lib/nvmf/mdns_server.o 00:02:54.056 CC lib/nvmf/vfio_user.o 00:02:54.056 CC lib/ftl/ftl_l2p_cache.o 00:02:54.056 CC lib/nvmf/rdma.o 
00:02:54.314 CC lib/nvmf/auth.o 00:02:54.314 CC lib/ftl/mngt/ftl_mngt.o 00:02:54.314 CC lib/ftl/ftl_p2l_log.o 00:02:54.314 CC lib/ftl/ftl_p2l.o 00:02:54.314 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:54.314 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:54.314 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:54.314 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:54.314 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:54.314 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:54.314 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:54.314 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:54.314 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:54.314 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:54.314 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:54.314 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:54.314 CC lib/ftl/utils/ftl_conf.o 00:02:54.314 CC lib/ftl/utils/ftl_md.o 00:02:54.314 CC lib/ftl/utils/ftl_mempool.o 00:02:54.314 CC lib/ftl/utils/ftl_property.o 00:02:54.314 CC lib/ftl/utils/ftl_bitmap.o 00:02:54.314 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:54.314 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:54.314 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:54.314 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:54.314 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:54.314 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:54.314 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:54.314 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:54.314 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:54.314 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:54.314 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:54.314 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:54.314 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:54.314 CC lib/ftl/base/ftl_base_dev.o 00:02:54.314 CC lib/ftl/base/ftl_base_bdev.o 00:02:54.314 CC lib/ftl/ftl_trace.o 00:02:54.882 LIB libspdk_nbd.a 00:02:54.882 SO libspdk_nbd.so.7.0 00:02:54.882 SYMLINK libspdk_nbd.so 00:02:54.882 LIB libspdk_scsi.a 00:02:54.882 LIB libspdk_ublk.a 00:02:54.882 SO libspdk_ublk.so.3.0 00:02:54.882 SO libspdk_scsi.so.9.0 00:02:54.882 SYMLINK libspdk_ublk.so 00:02:55.154 SYMLINK libspdk_scsi.so 
00:02:55.154 LIB libspdk_ftl.a 00:02:55.154 SO libspdk_ftl.so.9.0 00:02:55.412 CC lib/iscsi/conn.o 00:02:55.412 CC lib/iscsi/init_grp.o 00:02:55.412 CC lib/iscsi/param.o 00:02:55.412 CC lib/iscsi/iscsi.o 00:02:55.412 CC lib/iscsi/portal_grp.o 00:02:55.412 CC lib/iscsi/tgt_node.o 00:02:55.412 CC lib/iscsi/iscsi_subsystem.o 00:02:55.412 CC lib/iscsi/iscsi_rpc.o 00:02:55.412 CC lib/vhost/vhost.o 00:02:55.412 CC lib/iscsi/task.o 00:02:55.412 CC lib/vhost/vhost_rpc.o 00:02:55.412 CC lib/vhost/vhost_scsi.o 00:02:55.412 CC lib/vhost/vhost_blk.o 00:02:55.412 CC lib/vhost/rte_vhost_user.o 00:02:55.412 SYMLINK libspdk_ftl.so 00:02:55.982 LIB libspdk_nvmf.a 00:02:55.982 SO libspdk_nvmf.so.20.0 00:02:56.242 LIB libspdk_vhost.a 00:02:56.242 SYMLINK libspdk_nvmf.so 00:02:56.242 SO libspdk_vhost.so.8.0 00:02:56.242 SYMLINK libspdk_vhost.so 00:02:56.242 LIB libspdk_iscsi.a 00:02:56.504 SO libspdk_iscsi.so.8.0 00:02:56.504 SYMLINK libspdk_iscsi.so 00:02:57.074 CC module/env_dpdk/env_dpdk_rpc.o 00:02:57.074 CC module/vfu_device/vfu_virtio.o 00:02:57.074 CC module/vfu_device/vfu_virtio_blk.o 00:02:57.074 CC module/vfu_device/vfu_virtio_scsi.o 00:02:57.074 CC module/vfu_device/vfu_virtio_rpc.o 00:02:57.074 CC module/vfu_device/vfu_virtio_fs.o 00:02:57.333 LIB libspdk_env_dpdk_rpc.a 00:02:57.333 CC module/sock/posix/posix.o 00:02:57.333 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:57.333 CC module/keyring/file/keyring_rpc.o 00:02:57.333 CC module/accel/ioat/accel_ioat_rpc.o 00:02:57.333 CC module/accel/ioat/accel_ioat.o 00:02:57.333 CC module/keyring/file/keyring.o 00:02:57.333 CC module/accel/error/accel_error.o 00:02:57.333 CC module/accel/error/accel_error_rpc.o 00:02:57.333 CC module/scheduler/gscheduler/gscheduler.o 00:02:57.333 CC module/keyring/linux/keyring.o 00:02:57.333 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:57.333 CC module/accel/dsa/accel_dsa.o 00:02:57.333 CC module/accel/dsa/accel_dsa_rpc.o 00:02:57.333 CC module/keyring/linux/keyring_rpc.o 
00:02:57.333 CC module/accel/iaa/accel_iaa.o 00:02:57.333 CC module/accel/iaa/accel_iaa_rpc.o 00:02:57.333 SO libspdk_env_dpdk_rpc.so.6.0 00:02:57.333 CC module/blob/bdev/blob_bdev.o 00:02:57.333 CC module/fsdev/aio/fsdev_aio.o 00:02:57.333 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:57.334 CC module/fsdev/aio/linux_aio_mgr.o 00:02:57.334 SYMLINK libspdk_env_dpdk_rpc.so 00:02:57.334 LIB libspdk_scheduler_gscheduler.a 00:02:57.334 LIB libspdk_keyring_linux.a 00:02:57.594 LIB libspdk_keyring_file.a 00:02:57.594 LIB libspdk_scheduler_dpdk_governor.a 00:02:57.594 SO libspdk_scheduler_gscheduler.so.4.0 00:02:57.594 SO libspdk_keyring_linux.so.1.0 00:02:57.594 SO libspdk_keyring_file.so.2.0 00:02:57.594 LIB libspdk_accel_ioat.a 00:02:57.594 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:57.594 LIB libspdk_scheduler_dynamic.a 00:02:57.594 LIB libspdk_accel_iaa.a 00:02:57.594 LIB libspdk_accel_error.a 00:02:57.594 SYMLINK libspdk_scheduler_gscheduler.so 00:02:57.594 LIB libspdk_accel_dsa.a 00:02:57.594 SO libspdk_accel_ioat.so.6.0 00:02:57.594 SYMLINK libspdk_keyring_linux.so 00:02:57.594 SO libspdk_scheduler_dynamic.so.4.0 00:02:57.594 SO libspdk_accel_iaa.so.3.0 00:02:57.594 SO libspdk_accel_error.so.2.0 00:02:57.594 SYMLINK libspdk_keyring_file.so 00:02:57.594 LIB libspdk_blob_bdev.a 00:02:57.594 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:57.594 SO libspdk_accel_dsa.so.5.0 00:02:57.594 SYMLINK libspdk_accel_ioat.so 00:02:57.594 SO libspdk_blob_bdev.so.12.0 00:02:57.594 SYMLINK libspdk_scheduler_dynamic.so 00:02:57.594 SYMLINK libspdk_accel_iaa.so 00:02:57.594 SYMLINK libspdk_accel_error.so 00:02:57.594 LIB libspdk_vfu_device.a 00:02:57.594 SYMLINK libspdk_accel_dsa.so 00:02:57.594 SYMLINK libspdk_blob_bdev.so 00:02:57.594 SO libspdk_vfu_device.so.3.0 00:02:57.853 SYMLINK libspdk_vfu_device.so 00:02:57.853 LIB libspdk_fsdev_aio.a 00:02:57.853 LIB libspdk_sock_posix.a 00:02:57.854 SO libspdk_fsdev_aio.so.1.0 00:02:57.854 SO libspdk_sock_posix.so.6.0 00:02:58.114 
SYMLINK libspdk_fsdev_aio.so 00:02:58.114 SYMLINK libspdk_sock_posix.so 00:02:58.114 CC module/bdev/delay/vbdev_delay.o 00:02:58.114 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:58.114 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:58.114 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:58.114 CC module/bdev/error/vbdev_error.o 00:02:58.114 CC module/bdev/error/vbdev_error_rpc.o 00:02:58.114 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:58.114 CC module/bdev/ftl/bdev_ftl.o 00:02:58.114 CC module/blobfs/bdev/blobfs_bdev.o 00:02:58.114 CC module/bdev/aio/bdev_aio.o 00:02:58.114 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:58.114 CC module/bdev/passthru/vbdev_passthru.o 00:02:58.114 CC module/bdev/aio/bdev_aio_rpc.o 00:02:58.114 CC module/bdev/iscsi/bdev_iscsi.o 00:02:58.114 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:58.114 CC module/bdev/split/vbdev_split.o 00:02:58.114 CC module/bdev/split/vbdev_split_rpc.o 00:02:58.114 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:58.114 CC module/bdev/lvol/vbdev_lvol.o 00:02:58.114 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:58.114 CC module/bdev/null/bdev_null.o 00:02:58.114 CC module/bdev/gpt/gpt.o 00:02:58.114 CC module/bdev/gpt/vbdev_gpt.o 00:02:58.114 CC module/bdev/raid/bdev_raid.o 00:02:58.114 CC module/bdev/null/bdev_null_rpc.o 00:02:58.114 CC module/bdev/raid/bdev_raid_rpc.o 00:02:58.114 CC module/bdev/raid/bdev_raid_sb.o 00:02:58.114 CC module/bdev/nvme/bdev_nvme.o 00:02:58.114 CC module/bdev/raid/raid0.o 00:02:58.114 CC module/bdev/malloc/bdev_malloc.o 00:02:58.114 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:58.114 CC module/bdev/raid/raid1.o 00:02:58.114 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:58.114 CC module/bdev/raid/concat.o 00:02:58.114 CC module/bdev/nvme/nvme_rpc.o 00:02:58.114 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:58.114 CC module/bdev/nvme/bdev_mdns_client.o 00:02:58.114 CC module/bdev/nvme/vbdev_opal.o 00:02:58.114 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:58.373 CC 
module/bdev/virtio/bdev_virtio_scsi.o 00:02:58.373 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:58.373 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:58.373 LIB libspdk_blobfs_bdev.a 00:02:58.634 SO libspdk_blobfs_bdev.so.6.0 00:02:58.634 LIB libspdk_bdev_split.a 00:02:58.634 LIB libspdk_bdev_error.a 00:02:58.634 LIB libspdk_bdev_ftl.a 00:02:58.634 SO libspdk_bdev_split.so.6.0 00:02:58.634 SO libspdk_bdev_error.so.6.0 00:02:58.634 LIB libspdk_bdev_null.a 00:02:58.634 LIB libspdk_bdev_zone_block.a 00:02:58.634 SO libspdk_bdev_ftl.so.6.0 00:02:58.634 LIB libspdk_bdev_gpt.a 00:02:58.634 SYMLINK libspdk_blobfs_bdev.so 00:02:58.634 LIB libspdk_bdev_passthru.a 00:02:58.634 LIB libspdk_bdev_delay.a 00:02:58.634 SO libspdk_bdev_null.so.6.0 00:02:58.634 LIB libspdk_bdev_aio.a 00:02:58.634 SO libspdk_bdev_passthru.so.6.0 00:02:58.634 SYMLINK libspdk_bdev_error.so 00:02:58.634 SO libspdk_bdev_zone_block.so.6.0 00:02:58.634 SO libspdk_bdev_gpt.so.6.0 00:02:58.634 SYMLINK libspdk_bdev_split.so 00:02:58.634 SYMLINK libspdk_bdev_ftl.so 00:02:58.634 SO libspdk_bdev_delay.so.6.0 00:02:58.634 LIB libspdk_bdev_iscsi.a 00:02:58.634 SO libspdk_bdev_aio.so.6.0 00:02:58.634 LIB libspdk_bdev_malloc.a 00:02:58.634 SYMLINK libspdk_bdev_passthru.so 00:02:58.634 SYMLINK libspdk_bdev_null.so 00:02:58.634 SO libspdk_bdev_iscsi.so.6.0 00:02:58.634 SYMLINK libspdk_bdev_zone_block.so 00:02:58.634 SYMLINK libspdk_bdev_gpt.so 00:02:58.634 SO libspdk_bdev_malloc.so.6.0 00:02:58.634 SYMLINK libspdk_bdev_delay.so 00:02:58.634 LIB libspdk_bdev_lvol.a 00:02:58.634 SYMLINK libspdk_bdev_aio.so 00:02:58.634 SYMLINK libspdk_bdev_iscsi.so 00:02:58.634 SO libspdk_bdev_lvol.so.6.0 00:02:58.634 LIB libspdk_bdev_virtio.a 00:02:58.894 SYMLINK libspdk_bdev_malloc.so 00:02:58.894 SO libspdk_bdev_virtio.so.6.0 00:02:58.894 SYMLINK libspdk_bdev_lvol.so 00:02:58.894 SYMLINK libspdk_bdev_virtio.so 00:02:59.154 LIB libspdk_bdev_raid.a 00:02:59.154 SO libspdk_bdev_raid.so.6.0 00:02:59.154 SYMLINK libspdk_bdev_raid.so 
00:03:00.094 LIB libspdk_bdev_nvme.a 00:03:00.094 SO libspdk_bdev_nvme.so.7.1 00:03:00.354 SYMLINK libspdk_bdev_nvme.so 00:03:00.923 CC module/event/subsystems/keyring/keyring.o 00:03:00.923 CC module/event/subsystems/iobuf/iobuf.o 00:03:00.923 CC module/event/subsystems/vmd/vmd.o 00:03:00.923 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:00.923 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:00.923 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:00.923 CC module/event/subsystems/scheduler/scheduler.o 00:03:00.923 CC module/event/subsystems/fsdev/fsdev.o 00:03:00.923 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:00.923 CC module/event/subsystems/sock/sock.o 00:03:01.182 LIB libspdk_event_sock.a 00:03:01.182 LIB libspdk_event_vhost_blk.a 00:03:01.182 LIB libspdk_event_keyring.a 00:03:01.182 LIB libspdk_event_scheduler.a 00:03:01.182 LIB libspdk_event_vfu_tgt.a 00:03:01.182 LIB libspdk_event_vmd.a 00:03:01.182 LIB libspdk_event_fsdev.a 00:03:01.182 LIB libspdk_event_iobuf.a 00:03:01.182 SO libspdk_event_keyring.so.1.0 00:03:01.182 SO libspdk_event_sock.so.5.0 00:03:01.182 SO libspdk_event_vhost_blk.so.3.0 00:03:01.182 SO libspdk_event_scheduler.so.4.0 00:03:01.182 SO libspdk_event_vfu_tgt.so.3.0 00:03:01.182 SO libspdk_event_fsdev.so.1.0 00:03:01.182 SO libspdk_event_vmd.so.6.0 00:03:01.182 SO libspdk_event_iobuf.so.3.0 00:03:01.182 SYMLINK libspdk_event_keyring.so 00:03:01.182 SYMLINK libspdk_event_scheduler.so 00:03:01.182 SYMLINK libspdk_event_sock.so 00:03:01.182 SYMLINK libspdk_event_vhost_blk.so 00:03:01.182 SYMLINK libspdk_event_fsdev.so 00:03:01.182 SYMLINK libspdk_event_vfu_tgt.so 00:03:01.182 SYMLINK libspdk_event_vmd.so 00:03:01.182 SYMLINK libspdk_event_iobuf.so 00:03:01.750 CC module/event/subsystems/accel/accel.o 00:03:01.750 LIB libspdk_event_accel.a 00:03:01.750 SO libspdk_event_accel.so.6.0 00:03:02.008 SYMLINK libspdk_event_accel.so 00:03:02.268 CC module/event/subsystems/bdev/bdev.o 00:03:02.527 LIB libspdk_event_bdev.a 00:03:02.527 
SO libspdk_event_bdev.so.6.0 00:03:02.527 SYMLINK libspdk_event_bdev.so 00:03:02.795 CC module/event/subsystems/scsi/scsi.o 00:03:02.795 CC module/event/subsystems/ublk/ublk.o 00:03:02.795 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:02.795 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:02.795 CC module/event/subsystems/nbd/nbd.o 00:03:03.064 LIB libspdk_event_ublk.a 00:03:03.064 LIB libspdk_event_nbd.a 00:03:03.064 LIB libspdk_event_scsi.a 00:03:03.064 SO libspdk_event_scsi.so.6.0 00:03:03.064 SO libspdk_event_ublk.so.3.0 00:03:03.064 SO libspdk_event_nbd.so.6.0 00:03:03.064 LIB libspdk_event_nvmf.a 00:03:03.064 SYMLINK libspdk_event_scsi.so 00:03:03.064 SYMLINK libspdk_event_nbd.so 00:03:03.064 SYMLINK libspdk_event_ublk.so 00:03:03.064 SO libspdk_event_nvmf.so.6.0 00:03:03.323 SYMLINK libspdk_event_nvmf.so 00:03:03.582 CC module/event/subsystems/iscsi/iscsi.o 00:03:03.582 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:03.582 LIB libspdk_event_vhost_scsi.a 00:03:03.582 LIB libspdk_event_iscsi.a 00:03:03.582 SO libspdk_event_vhost_scsi.so.3.0 00:03:03.841 SO libspdk_event_iscsi.so.6.0 00:03:03.841 SYMLINK libspdk_event_vhost_scsi.so 00:03:03.841 SYMLINK libspdk_event_iscsi.so 00:03:04.100 SO libspdk.so.6.0 00:03:04.100 SYMLINK libspdk.so 00:03:04.359 CC app/trace_record/trace_record.o 00:03:04.359 CC app/spdk_lspci/spdk_lspci.o 00:03:04.359 CXX app/trace/trace.o 00:03:04.359 CC test/rpc_client/rpc_client_test.o 00:03:04.359 CC app/spdk_top/spdk_top.o 00:03:04.359 CC app/spdk_nvme_discover/discovery_aer.o 00:03:04.359 CC app/spdk_nvme_perf/perf.o 00:03:04.359 CC app/spdk_nvme_identify/identify.o 00:03:04.359 TEST_HEADER include/spdk/accel_module.h 00:03:04.359 TEST_HEADER include/spdk/accel.h 00:03:04.359 TEST_HEADER include/spdk/assert.h 00:03:04.359 TEST_HEADER include/spdk/barrier.h 00:03:04.359 TEST_HEADER include/spdk/base64.h 00:03:04.359 TEST_HEADER include/spdk/bdev.h 00:03:04.359 TEST_HEADER include/spdk/bdev_module.h 00:03:04.359 
TEST_HEADER include/spdk/bdev_zone.h 00:03:04.359 TEST_HEADER include/spdk/bit_array.h 00:03:04.359 TEST_HEADER include/spdk/blob_bdev.h 00:03:04.359 TEST_HEADER include/spdk/bit_pool.h 00:03:04.359 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:04.359 TEST_HEADER include/spdk/blobfs.h 00:03:04.359 TEST_HEADER include/spdk/config.h 00:03:04.359 TEST_HEADER include/spdk/conf.h 00:03:04.359 TEST_HEADER include/spdk/blob.h 00:03:04.359 TEST_HEADER include/spdk/cpuset.h 00:03:04.359 TEST_HEADER include/spdk/crc16.h 00:03:04.359 TEST_HEADER include/spdk/crc32.h 00:03:04.359 TEST_HEADER include/spdk/crc64.h 00:03:04.359 TEST_HEADER include/spdk/dif.h 00:03:04.359 TEST_HEADER include/spdk/endian.h 00:03:04.359 TEST_HEADER include/spdk/env_dpdk.h 00:03:04.359 TEST_HEADER include/spdk/dma.h 00:03:04.359 TEST_HEADER include/spdk/fd_group.h 00:03:04.359 TEST_HEADER include/spdk/event.h 00:03:04.359 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:04.359 TEST_HEADER include/spdk/env.h 00:03:04.359 TEST_HEADER include/spdk/file.h 00:03:04.359 CC app/nvmf_tgt/nvmf_main.o 00:03:04.359 TEST_HEADER include/spdk/fsdev.h 00:03:04.359 TEST_HEADER include/spdk/fsdev_module.h 00:03:04.359 TEST_HEADER include/spdk/fd.h 00:03:04.359 TEST_HEADER include/spdk/ftl.h 00:03:04.359 TEST_HEADER include/spdk/gpt_spec.h 00:03:04.359 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:04.359 TEST_HEADER include/spdk/hexlify.h 00:03:04.359 TEST_HEADER include/spdk/idxd.h 00:03:04.359 TEST_HEADER include/spdk/histogram_data.h 00:03:04.359 TEST_HEADER include/spdk/init.h 00:03:04.359 TEST_HEADER include/spdk/idxd_spec.h 00:03:04.359 TEST_HEADER include/spdk/ioat.h 00:03:04.359 TEST_HEADER include/spdk/ioat_spec.h 00:03:04.359 TEST_HEADER include/spdk/json.h 00:03:04.359 TEST_HEADER include/spdk/iscsi_spec.h 00:03:04.359 CC app/spdk_dd/spdk_dd.o 00:03:04.359 TEST_HEADER include/spdk/keyring.h 00:03:04.359 TEST_HEADER include/spdk/jsonrpc.h 00:03:04.359 TEST_HEADER include/spdk/likely.h 00:03:04.359 
TEST_HEADER include/spdk/lvol.h 00:03:04.359 TEST_HEADER include/spdk/keyring_module.h 00:03:04.359 TEST_HEADER include/spdk/log.h 00:03:04.359 TEST_HEADER include/spdk/mmio.h 00:03:04.359 TEST_HEADER include/spdk/nbd.h 00:03:04.359 TEST_HEADER include/spdk/net.h 00:03:04.359 TEST_HEADER include/spdk/memory.h 00:03:04.359 TEST_HEADER include/spdk/md5.h 00:03:04.359 TEST_HEADER include/spdk/nvme_intel.h 00:03:04.359 TEST_HEADER include/spdk/nvme.h 00:03:04.359 TEST_HEADER include/spdk/notify.h 00:03:04.359 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:04.359 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:04.359 TEST_HEADER include/spdk/nvme_spec.h 00:03:04.359 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:04.359 TEST_HEADER include/spdk/nvme_zns.h 00:03:04.359 TEST_HEADER include/spdk/nvmf_spec.h 00:03:04.359 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:04.359 TEST_HEADER include/spdk/nvmf.h 00:03:04.359 TEST_HEADER include/spdk/opal_spec.h 00:03:04.359 CC app/iscsi_tgt/iscsi_tgt.o 00:03:04.359 TEST_HEADER include/spdk/nvmf_transport.h 00:03:04.359 CC app/spdk_tgt/spdk_tgt.o 00:03:04.359 TEST_HEADER include/spdk/opal.h 00:03:04.359 TEST_HEADER include/spdk/pipe.h 00:03:04.359 TEST_HEADER include/spdk/reduce.h 00:03:04.359 TEST_HEADER include/spdk/queue.h 00:03:04.359 TEST_HEADER include/spdk/pci_ids.h 00:03:04.359 TEST_HEADER include/spdk/rpc.h 00:03:04.359 TEST_HEADER include/spdk/scheduler.h 00:03:04.359 TEST_HEADER include/spdk/sock.h 00:03:04.359 TEST_HEADER include/spdk/scsi.h 00:03:04.359 TEST_HEADER include/spdk/scsi_spec.h 00:03:04.359 TEST_HEADER include/spdk/stdinc.h 00:03:04.359 TEST_HEADER include/spdk/thread.h 00:03:04.359 TEST_HEADER include/spdk/string.h 00:03:04.359 TEST_HEADER include/spdk/trace_parser.h 00:03:04.359 TEST_HEADER include/spdk/tree.h 00:03:04.359 TEST_HEADER include/spdk/trace.h 00:03:04.359 TEST_HEADER include/spdk/ublk.h 00:03:04.359 TEST_HEADER include/spdk/util.h 00:03:04.359 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:04.359 
TEST_HEADER include/spdk/uuid.h 00:03:04.359 TEST_HEADER include/spdk/version.h 00:03:04.359 TEST_HEADER include/spdk/vhost.h 00:03:04.359 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:04.359 TEST_HEADER include/spdk/vmd.h 00:03:04.359 TEST_HEADER include/spdk/xor.h 00:03:04.359 TEST_HEADER include/spdk/zipf.h 00:03:04.359 CXX test/cpp_headers/accel.o 00:03:04.359 CXX test/cpp_headers/accel_module.o 00:03:04.359 CXX test/cpp_headers/barrier.o 00:03:04.359 CXX test/cpp_headers/base64.o 00:03:04.359 CXX test/cpp_headers/assert.o 00:03:04.359 CXX test/cpp_headers/bdev.o 00:03:04.359 CXX test/cpp_headers/bdev_zone.o 00:03:04.359 CXX test/cpp_headers/bit_array.o 00:03:04.359 CXX test/cpp_headers/bdev_module.o 00:03:04.359 CXX test/cpp_headers/bit_pool.o 00:03:04.359 CXX test/cpp_headers/blobfs_bdev.o 00:03:04.360 CXX test/cpp_headers/blobfs.o 00:03:04.360 CXX test/cpp_headers/blob_bdev.o 00:03:04.360 CXX test/cpp_headers/conf.o 00:03:04.360 CXX test/cpp_headers/blob.o 00:03:04.360 CXX test/cpp_headers/config.o 00:03:04.360 CXX test/cpp_headers/cpuset.o 00:03:04.360 CXX test/cpp_headers/crc16.o 00:03:04.360 CXX test/cpp_headers/crc32.o 00:03:04.360 CXX test/cpp_headers/crc64.o 00:03:04.360 CXX test/cpp_headers/dif.o 00:03:04.360 CXX test/cpp_headers/endian.o 00:03:04.360 CXX test/cpp_headers/dma.o 00:03:04.360 CXX test/cpp_headers/env_dpdk.o 00:03:04.360 CXX test/cpp_headers/env.o 00:03:04.633 CXX test/cpp_headers/event.o 00:03:04.633 CXX test/cpp_headers/fd_group.o 00:03:04.633 CXX test/cpp_headers/file.o 00:03:04.633 CXX test/cpp_headers/fd.o 00:03:04.633 CXX test/cpp_headers/fsdev_module.o 00:03:04.633 CXX test/cpp_headers/fsdev.o 00:03:04.633 CXX test/cpp_headers/ftl.o 00:03:04.633 CXX test/cpp_headers/fuse_dispatcher.o 00:03:04.633 CXX test/cpp_headers/idxd.o 00:03:04.633 CXX test/cpp_headers/histogram_data.o 00:03:04.633 CXX test/cpp_headers/gpt_spec.o 00:03:04.633 CXX test/cpp_headers/hexlify.o 00:03:04.633 CXX test/cpp_headers/idxd_spec.o 00:03:04.633 CXX 
test/cpp_headers/ioat_spec.o 00:03:04.633 CXX test/cpp_headers/init.o 00:03:04.633 CXX test/cpp_headers/ioat.o 00:03:04.633 CXX test/cpp_headers/json.o 00:03:04.633 CXX test/cpp_headers/jsonrpc.o 00:03:04.633 CXX test/cpp_headers/iscsi_spec.o 00:03:04.633 CXX test/cpp_headers/keyring.o 00:03:04.633 CXX test/cpp_headers/keyring_module.o 00:03:04.633 CXX test/cpp_headers/likely.o 00:03:04.633 CXX test/cpp_headers/log.o 00:03:04.633 CXX test/cpp_headers/md5.o 00:03:04.633 CXX test/cpp_headers/mmio.o 00:03:04.633 CXX test/cpp_headers/lvol.o 00:03:04.633 CXX test/cpp_headers/nbd.o 00:03:04.633 CXX test/cpp_headers/memory.o 00:03:04.633 CXX test/cpp_headers/net.o 00:03:04.633 CXX test/cpp_headers/nvme_ocssd.o 00:03:04.633 CXX test/cpp_headers/nvme_intel.o 00:03:04.633 CXX test/cpp_headers/notify.o 00:03:04.633 CXX test/cpp_headers/nvme.o 00:03:04.633 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:04.633 CXX test/cpp_headers/nvme_spec.o 00:03:04.633 CXX test/cpp_headers/nvme_zns.o 00:03:04.633 CXX test/cpp_headers/nvmf_cmd.o 00:03:04.633 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:04.633 CXX test/cpp_headers/nvmf.o 00:03:04.633 CXX test/cpp_headers/nvmf_spec.o 00:03:04.633 CXX test/cpp_headers/nvmf_transport.o 00:03:04.633 CXX test/cpp_headers/opal.o 00:03:04.633 CXX test/cpp_headers/opal_spec.o 00:03:04.633 CXX test/cpp_headers/pci_ids.o 00:03:04.633 CXX test/cpp_headers/queue.o 00:03:04.633 CXX test/cpp_headers/pipe.o 00:03:04.633 CXX test/cpp_headers/reduce.o 00:03:04.633 CXX test/cpp_headers/rpc.o 00:03:04.633 CXX test/cpp_headers/scheduler.o 00:03:04.633 CXX test/cpp_headers/scsi.o 00:03:04.633 CXX test/cpp_headers/scsi_spec.o 00:03:04.633 CXX test/cpp_headers/sock.o 00:03:04.633 CXX test/cpp_headers/stdinc.o 00:03:04.633 CXX test/cpp_headers/string.o 00:03:04.633 CXX test/cpp_headers/thread.o 00:03:04.633 CXX test/cpp_headers/trace.o 00:03:04.633 CXX test/cpp_headers/trace_parser.o 00:03:04.633 CXX test/cpp_headers/tree.o 00:03:04.633 CC examples/ioat/verify/verify.o 
00:03:04.633 CC examples/ioat/perf/perf.o 00:03:04.633 CC test/thread/poller_perf/poller_perf.o 00:03:04.633 CC test/app/stub/stub.o 00:03:04.633 CC test/env/vtophys/vtophys.o 00:03:04.633 LINK spdk_lspci 00:03:04.633 CC examples/util/zipf/zipf.o 00:03:04.634 CC test/app/histogram_perf/histogram_perf.o 00:03:04.634 CXX test/cpp_headers/ublk.o 00:03:04.634 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:04.634 CC test/env/pci/pci_ut.o 00:03:04.634 CC test/env/memory/memory_ut.o 00:03:04.634 CC app/fio/nvme/fio_plugin.o 00:03:04.634 CC test/app/jsoncat/jsoncat.o 00:03:04.634 CC app/fio/bdev/fio_plugin.o 00:03:04.634 CC test/dma/test_dma/test_dma.o 00:03:04.634 CXX test/cpp_headers/util.o 00:03:04.913 CC test/app/bdev_svc/bdev_svc.o 00:03:04.913 LINK rpc_client_test 00:03:04.913 LINK interrupt_tgt 00:03:05.182 LINK nvmf_tgt 00:03:05.182 LINK spdk_nvme_discover 00:03:05.182 LINK spdk_trace_record 00:03:05.182 CC test/env/mem_callbacks/mem_callbacks.o 00:03:05.182 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:05.182 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:05.182 LINK iscsi_tgt 00:03:05.441 CXX test/cpp_headers/uuid.o 00:03:05.441 CXX test/cpp_headers/version.o 00:03:05.441 CXX test/cpp_headers/vfio_user_pci.o 00:03:05.441 CXX test/cpp_headers/vfio_user_spec.o 00:03:05.441 LINK jsoncat 00:03:05.441 LINK histogram_perf 00:03:05.441 LINK poller_perf 00:03:05.441 CXX test/cpp_headers/vhost.o 00:03:05.441 CXX test/cpp_headers/vmd.o 00:03:05.441 CXX test/cpp_headers/xor.o 00:03:05.441 CXX test/cpp_headers/zipf.o 00:03:05.441 LINK env_dpdk_post_init 00:03:05.441 LINK vtophys 00:03:05.441 LINK zipf 00:03:05.441 LINK spdk_tgt 00:03:05.441 LINK stub 00:03:05.441 LINK verify 00:03:05.441 LINK ioat_perf 00:03:05.441 LINK bdev_svc 00:03:05.441 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:05.441 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:05.441 LINK spdk_dd 00:03:05.441 LINK spdk_trace 00:03:05.699 LINK pci_ut 00:03:05.699 LINK spdk_nvme 00:03:05.699 LINK 
test_dma 00:03:05.699 LINK spdk_bdev 00:03:05.699 LINK spdk_nvme_perf 00:03:05.700 LINK nvme_fuzz 00:03:05.700 LINK spdk_nvme_identify 00:03:05.700 CC test/event/event_perf/event_perf.o 00:03:05.700 LINK spdk_top 00:03:05.700 CC test/event/reactor_perf/reactor_perf.o 00:03:05.700 LINK vhost_fuzz 00:03:05.700 CC test/event/reactor/reactor.o 00:03:05.959 LINK mem_callbacks 00:03:05.959 CC examples/vmd/lsvmd/lsvmd.o 00:03:05.959 CC test/event/app_repeat/app_repeat.o 00:03:05.959 CC examples/idxd/perf/perf.o 00:03:05.959 CC examples/sock/hello_world/hello_sock.o 00:03:05.959 CC examples/vmd/led/led.o 00:03:05.959 CC app/vhost/vhost.o 00:03:05.959 CC examples/thread/thread/thread_ex.o 00:03:05.959 CC test/event/scheduler/scheduler.o 00:03:05.959 LINK reactor_perf 00:03:05.959 LINK event_perf 00:03:05.959 LINK reactor 00:03:05.959 LINK lsvmd 00:03:05.959 LINK led 00:03:05.959 LINK app_repeat 00:03:06.219 LINK vhost 00:03:06.219 LINK hello_sock 00:03:06.219 LINK scheduler 00:03:06.219 LINK idxd_perf 00:03:06.219 LINK thread 00:03:06.219 LINK memory_ut 00:03:06.219 CC test/nvme/e2edp/nvme_dp.o 00:03:06.219 CC test/blobfs/mkfs/mkfs.o 00:03:06.219 CC test/nvme/reset/reset.o 00:03:06.219 CC test/nvme/startup/startup.o 00:03:06.219 CC test/nvme/cuse/cuse.o 00:03:06.219 CC test/nvme/aer/aer.o 00:03:06.219 CC test/nvme/fused_ordering/fused_ordering.o 00:03:06.219 CC test/nvme/boot_partition/boot_partition.o 00:03:06.219 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:06.219 CC test/nvme/reserve/reserve.o 00:03:06.219 CC test/nvme/connect_stress/connect_stress.o 00:03:06.219 CC test/nvme/sgl/sgl.o 00:03:06.219 CC test/nvme/overhead/overhead.o 00:03:06.219 CC test/nvme/simple_copy/simple_copy.o 00:03:06.219 CC test/nvme/fdp/fdp.o 00:03:06.219 CC test/nvme/compliance/nvme_compliance.o 00:03:06.219 CC test/nvme/err_injection/err_injection.o 00:03:06.219 CC test/accel/dif/dif.o 00:03:06.479 CC test/lvol/esnap/esnap.o 00:03:06.479 LINK startup 00:03:06.479 LINK doorbell_aers 
00:03:06.479 LINK fused_ordering 00:03:06.479 LINK boot_partition 00:03:06.479 LINK connect_stress 00:03:06.479 LINK mkfs 00:03:06.479 LINK reserve 00:03:06.479 LINK err_injection 00:03:06.479 LINK simple_copy 00:03:06.479 LINK reset 00:03:06.479 LINK nvme_dp 00:03:06.479 LINK sgl 00:03:06.479 LINK aer 00:03:06.479 LINK overhead 00:03:06.479 LINK nvme_compliance 00:03:06.479 LINK fdp 00:03:06.479 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:06.479 CC examples/nvme/arbitration/arbitration.o 00:03:06.479 CC examples/nvme/hotplug/hotplug.o 00:03:06.479 CC examples/nvme/abort/abort.o 00:03:06.479 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:06.479 CC examples/nvme/reconnect/reconnect.o 00:03:06.479 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:06.479 CC examples/nvme/hello_world/hello_world.o 00:03:06.738 LINK iscsi_fuzz 00:03:06.738 CC examples/accel/perf/accel_perf.o 00:03:06.738 CC examples/blob/cli/blobcli.o 00:03:06.738 CC examples/blob/hello_world/hello_blob.o 00:03:06.738 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:06.738 LINK pmr_persistence 00:03:06.738 LINK cmb_copy 00:03:06.738 LINK dif 00:03:06.738 LINK hello_world 00:03:06.738 LINK hotplug 00:03:06.738 LINK arbitration 00:03:06.998 LINK reconnect 00:03:06.998 LINK abort 00:03:06.998 LINK nvme_manage 00:03:06.998 LINK hello_blob 00:03:06.998 LINK hello_fsdev 00:03:06.998 LINK accel_perf 00:03:07.258 LINK blobcli 00:03:07.258 LINK cuse 00:03:07.520 CC test/bdev/bdevio/bdevio.o 00:03:07.520 CC examples/bdev/hello_world/hello_bdev.o 00:03:07.520 CC examples/bdev/bdevperf/bdevperf.o 00:03:07.780 LINK bdevio 00:03:07.780 LINK hello_bdev 00:03:08.349 LINK bdevperf 00:03:08.918 CC examples/nvmf/nvmf/nvmf.o 00:03:09.177 LINK nvmf 00:03:10.117 LINK esnap 00:03:10.377 00:03:10.377 real 0m55.915s 00:03:10.377 user 7m52.425s 00:03:10.377 sys 4m20.911s 00:03:10.377 04:57:52 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:10.377 04:57:52 make -- common/autotest_common.sh@10 -- $ set +x 
00:03:10.377 ************************************ 00:03:10.377 END TEST make 00:03:10.377 ************************************ 00:03:10.377 04:57:52 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:10.377 04:57:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:10.377 04:57:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:10.377 04:57:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.377 04:57:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:10.377 04:57:52 -- pm/common@44 -- $ pid=191106 00:03:10.377 04:57:52 -- pm/common@50 -- $ kill -TERM 191106 00:03:10.377 04:57:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.377 04:57:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:10.377 04:57:52 -- pm/common@44 -- $ pid=191108 00:03:10.377 04:57:52 -- pm/common@50 -- $ kill -TERM 191108 00:03:10.377 04:57:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.377 04:57:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:10.377 04:57:52 -- pm/common@44 -- $ pid=191110 00:03:10.377 04:57:52 -- pm/common@50 -- $ kill -TERM 191110 00:03:10.377 04:57:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.377 04:57:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:10.377 04:57:52 -- pm/common@44 -- $ pid=191134 00:03:10.377 04:57:52 -- pm/common@50 -- $ sudo -E kill -TERM 191134 00:03:10.377 04:57:52 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:10.377 04:57:52 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:10.637 04:57:52 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:10.637 04:57:52 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:10.637 04:57:52 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:10.637 04:57:52 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:10.637 04:57:52 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:10.637 04:57:52 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:10.637 04:57:52 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:10.637 04:57:52 -- scripts/common.sh@336 -- # IFS=.-: 00:03:10.637 04:57:52 -- scripts/common.sh@336 -- # read -ra ver1 00:03:10.637 04:57:52 -- scripts/common.sh@337 -- # IFS=.-: 00:03:10.637 04:57:52 -- scripts/common.sh@337 -- # read -ra ver2 00:03:10.637 04:57:52 -- scripts/common.sh@338 -- # local 'op=<' 00:03:10.637 04:57:52 -- scripts/common.sh@340 -- # ver1_l=2 00:03:10.637 04:57:52 -- scripts/common.sh@341 -- # ver2_l=1 00:03:10.637 04:57:52 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:10.637 04:57:52 -- scripts/common.sh@344 -- # case "$op" in 00:03:10.638 04:57:52 -- scripts/common.sh@345 -- # : 1 00:03:10.638 04:57:52 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:10.638 04:57:52 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:10.638 04:57:52 -- scripts/common.sh@365 -- # decimal 1 00:03:10.638 04:57:52 -- scripts/common.sh@353 -- # local d=1 00:03:10.638 04:57:52 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:10.638 04:57:52 -- scripts/common.sh@355 -- # echo 1 00:03:10.638 04:57:52 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:10.638 04:57:52 -- scripts/common.sh@366 -- # decimal 2 00:03:10.638 04:57:52 -- scripts/common.sh@353 -- # local d=2 00:03:10.638 04:57:52 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:10.638 04:57:52 -- scripts/common.sh@355 -- # echo 2 00:03:10.638 04:57:52 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:10.638 04:57:52 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:10.638 04:57:52 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:10.638 04:57:52 -- scripts/common.sh@368 -- # return 0 00:03:10.638 04:57:52 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:10.638 04:57:52 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:10.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.638 --rc genhtml_branch_coverage=1 00:03:10.638 --rc genhtml_function_coverage=1 00:03:10.638 --rc genhtml_legend=1 00:03:10.638 --rc geninfo_all_blocks=1 00:03:10.638 --rc geninfo_unexecuted_blocks=1 00:03:10.638 00:03:10.638 ' 00:03:10.638 04:57:52 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:10.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.638 --rc genhtml_branch_coverage=1 00:03:10.638 --rc genhtml_function_coverage=1 00:03:10.638 --rc genhtml_legend=1 00:03:10.638 --rc geninfo_all_blocks=1 00:03:10.638 --rc geninfo_unexecuted_blocks=1 00:03:10.638 00:03:10.638 ' 00:03:10.638 04:57:52 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:10.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.638 --rc genhtml_branch_coverage=1 00:03:10.638 --rc 
genhtml_function_coverage=1 00:03:10.638 --rc genhtml_legend=1 00:03:10.638 --rc geninfo_all_blocks=1 00:03:10.638 --rc geninfo_unexecuted_blocks=1 00:03:10.638 00:03:10.638 ' 00:03:10.638 04:57:52 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:10.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.638 --rc genhtml_branch_coverage=1 00:03:10.638 --rc genhtml_function_coverage=1 00:03:10.638 --rc genhtml_legend=1 00:03:10.638 --rc geninfo_all_blocks=1 00:03:10.638 --rc geninfo_unexecuted_blocks=1 00:03:10.638 00:03:10.638 ' 00:03:10.638 04:57:52 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:10.638 04:57:52 -- nvmf/common.sh@7 -- # uname -s 00:03:10.638 04:57:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:10.638 04:57:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:10.638 04:57:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:10.638 04:57:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:10.638 04:57:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:10.638 04:57:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:10.638 04:57:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:10.638 04:57:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:10.638 04:57:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:10.638 04:57:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:10.638 04:57:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:03:10.638 04:57:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:03:10.638 04:57:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:10.638 04:57:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:10.638 04:57:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:10.638 04:57:52 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:10.638 04:57:52 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:10.638 04:57:52 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:10.638 04:57:52 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:10.638 04:57:52 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:10.638 04:57:52 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:10.638 04:57:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.638 04:57:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.638 04:57:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.638 04:57:52 -- paths/export.sh@5 -- # export PATH 00:03:10.638 04:57:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.638 04:57:52 -- nvmf/common.sh@51 -- # : 0 00:03:10.638 04:57:52 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:10.638 04:57:52 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:10.638 04:57:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:10.638 04:57:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:10.638 04:57:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:10.638 04:57:52 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:10.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:10.638 04:57:52 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:10.638 04:57:52 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:10.638 04:57:52 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:10.638 04:57:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:10.638 04:57:52 -- spdk/autotest.sh@32 -- # uname -s 00:03:10.638 04:57:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:10.638 04:57:52 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:10.638 04:57:52 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:10.638 04:57:53 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:10.638 04:57:53 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:10.638 04:57:53 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:10.638 04:57:53 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:10.638 04:57:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:10.638 04:57:53 -- spdk/autotest.sh@48 -- # udevadm_pid=255268 00:03:10.638 04:57:53 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:10.638 04:57:53 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:10.638 04:57:53 -- pm/common@17 -- # local monitor 00:03:10.638 04:57:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.638 04:57:53 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:10.638 04:57:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.638 04:57:53 -- pm/common@21 -- # date +%s 00:03:10.638 04:57:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.638 04:57:53 -- pm/common@21 -- # date +%s 00:03:10.638 04:57:53 -- pm/common@25 -- # sleep 1 00:03:10.638 04:57:53 -- pm/common@21 -- # date +%s 00:03:10.638 04:57:53 -- pm/common@21 -- # date +%s 00:03:10.638 04:57:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733716673 00:03:10.638 04:57:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733716673 00:03:10.638 04:57:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733716673 00:03:10.638 04:57:53 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733716673 00:03:10.638 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733716673_collect-cpu-load.pm.log 00:03:10.638 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733716673_collect-vmstat.pm.log 00:03:10.638 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733716673_collect-cpu-temp.pm.log 00:03:10.638 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733716673_collect-bmc-pm.bmc.pm.log 00:03:11.574 
04:57:54 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:11.574 04:57:54 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:11.574 04:57:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:11.574 04:57:54 -- common/autotest_common.sh@10 -- # set +x 00:03:11.574 04:57:54 -- spdk/autotest.sh@59 -- # create_test_list 00:03:11.574 04:57:54 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:11.574 04:57:54 -- common/autotest_common.sh@10 -- # set +x 00:03:11.833 04:57:54 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:11.833 04:57:54 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:11.833 04:57:54 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:11.833 04:57:54 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:11.833 04:57:54 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:11.833 04:57:54 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:11.833 04:57:54 -- common/autotest_common.sh@1457 -- # uname 00:03:11.833 04:57:54 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:11.833 04:57:54 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:11.833 04:57:54 -- common/autotest_common.sh@1477 -- # uname 00:03:11.833 04:57:54 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:11.833 04:57:54 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:11.833 04:57:54 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:11.833 lcov: LCOV version 1.15 00:03:11.833 04:57:54 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:24.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:24.060 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:38.958 04:58:19 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:38.958 04:58:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:38.958 04:58:19 -- common/autotest_common.sh@10 -- # set +x 00:03:38.958 04:58:19 -- spdk/autotest.sh@78 -- # rm -f 00:03:38.958 04:58:19 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.339 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:40.340 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:40.340 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:40.340 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:40.340 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:40.340 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:40.340 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:40.340 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:40.340 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:40.340 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:40.600 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:40.600 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:40.600 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:40.600 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:40.600 
0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:40.600 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:40.600 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:40.600 04:58:23 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:40.600 04:58:23 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:40.600 04:58:23 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:40.600 04:58:23 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:40.600 04:58:23 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:40.600 04:58:23 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:40.600 04:58:23 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:40.600 04:58:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:40.600 04:58:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:40.600 04:58:23 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:40.600 04:58:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:40.600 04:58:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:40.600 04:58:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:40.600 04:58:23 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:40.600 04:58:23 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:40.860 No valid GPT data, bailing 00:03:40.860 04:58:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:40.860 04:58:23 -- scripts/common.sh@394 -- # pt= 00:03:40.860 04:58:23 -- scripts/common.sh@395 -- # return 1 00:03:40.860 04:58:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:40.860 1+0 records in 00:03:40.860 1+0 records out 00:03:40.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00172069 s, 609 MB/s 00:03:40.860 04:58:23 -- spdk/autotest.sh@105 -- # sync 00:03:40.860 04:58:23 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:40.860 04:58:23 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:40.860 04:58:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:48.992 04:58:30 -- spdk/autotest.sh@111 -- # uname -s 00:03:48.992 04:58:30 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:48.992 04:58:30 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:48.993 04:58:30 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:51.536 Hugepages 00:03:51.536 node hugesize free / total 00:03:51.536 node0 1048576kB 0 / 0 00:03:51.536 node0 2048kB 0 / 0 00:03:51.536 node1 1048576kB 0 / 0 00:03:51.536 node1 2048kB 0 / 0 00:03:51.536 00:03:51.536 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:51.536 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:51.536 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:51.536 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:51.536 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:51.536 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:51.536 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:51.536 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:51.536 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:51.536 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:51.536 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:51.536 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:51.536 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:51.536 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:51.536 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:51.536 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:51.536 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:51.536 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:51.796 04:58:34 -- spdk/autotest.sh@117 -- # uname -s 00:03:51.796 04:58:34 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:51.796 04:58:34 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:03:51.796 04:58:34 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.092 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:55.092 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:55.092 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:55.092 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:55.092 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:55.092 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:55.092 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:55.092 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:55.092 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:55.092 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:55.092 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:55.092 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:55.092 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:55.092 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:55.092 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:55.352 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:56.732 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:56.732 04:58:39 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:58.108 04:58:40 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:58.108 04:58:40 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:58.108 04:58:40 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:58.108 04:58:40 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:58.108 04:58:40 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:58.108 04:58:40 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:58.108 04:58:40 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:58.108 04:58:40 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:58.108 04:58:40 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:03:58.108 04:58:40 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:58.108 04:58:40 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:03:58.108 04:58:40 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:01.408 Waiting for block devices as requested 00:04:01.408 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:01.408 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:01.408 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:01.668 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:01.668 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:01.668 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:01.928 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:01.928 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:01.928 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:02.189 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:02.189 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:02.189 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:02.450 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:02.450 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:02.450 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:02.710 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:02.710 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:02.979 04:58:45 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:02.979 04:58:45 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:02.979 04:58:45 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:02.979 04:58:45 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:04:02.979 04:58:45 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:02.979 04:58:45 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:02.979 04:58:45 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:02.979 04:58:45 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:02.979 04:58:45 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:02.980 04:58:45 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:02.980 04:58:45 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:02.980 04:58:45 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:02.980 04:58:45 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:02.980 04:58:45 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:02.980 04:58:45 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:02.980 04:58:45 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:02.980 04:58:45 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:02.980 04:58:45 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:02.980 04:58:45 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:02.980 04:58:45 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:02.980 04:58:45 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:02.980 04:58:45 -- common/autotest_common.sh@1543 -- # continue 00:04:02.980 04:58:45 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:02.980 04:58:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:02.980 04:58:45 -- common/autotest_common.sh@10 -- # set +x 00:04:02.980 04:58:45 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:02.980 04:58:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.980 04:58:45 -- common/autotest_common.sh@10 -- # set +x 00:04:02.980 04:58:45 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:06.273 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:06.273 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:04:06.273 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:06.273 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:06.273 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:06.273 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:06.532 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:06.532 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:06.532 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:06.532 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:06.532 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:06.532 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:06.532 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:06.532 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:06.532 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:06.532 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:08.442 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:08.442 04:58:50 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:08.442 04:58:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:08.442 04:58:50 -- common/autotest_common.sh@10 -- # set +x 00:04:08.442 04:58:50 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:08.442 04:58:50 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:08.442 04:58:50 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:08.442 04:58:50 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:08.442 04:58:50 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:08.442 04:58:50 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:08.442 04:58:50 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:08.442 04:58:50 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:08.442 04:58:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:08.442 04:58:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:08.442 04:58:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:04:08.443 04:58:50 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:08.443 04:58:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:08.443 04:58:50 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:08.443 04:58:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:04:08.443 04:58:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:08.443 04:58:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:08.443 04:58:50 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:08.443 04:58:50 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:08.443 04:58:50 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:08.443 04:58:50 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:08.443 04:58:50 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0 00:04:08.443 04:58:50 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]] 00:04:08.443 04:58:50 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=271222 00:04:08.443 04:58:50 -- common/autotest_common.sh@1585 -- # waitforlisten 271222 00:04:08.443 04:58:50 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.443 04:58:50 -- common/autotest_common.sh@835 -- # '[' -z 271222 ']' 00:04:08.443 04:58:50 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.443 04:58:50 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:08.443 04:58:50 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:08.443 04:58:50 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:08.443 04:58:50 -- common/autotest_common.sh@10 -- # set +x 00:04:08.443 [2024-12-09 04:58:50.812332] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:04:08.443 [2024-12-09 04:58:50.812389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid271222 ] 00:04:08.443 [2024-12-09 04:58:50.906273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.703 [2024-12-09 04:58:50.947742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.272 04:58:51 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:09.272 04:58:51 -- common/autotest_common.sh@868 -- # return 0 00:04:09.272 04:58:51 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:09.272 04:58:51 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:09.272 04:58:51 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:12.568 nvme0n1 00:04:12.568 04:58:54 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:12.568 [2024-12-09 04:58:54.847884] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:12.568 request: 00:04:12.568 { 00:04:12.568 "nvme_ctrlr_name": "nvme0", 00:04:12.568 "password": "test", 00:04:12.568 "method": "bdev_nvme_opal_revert", 00:04:12.568 "req_id": 1 00:04:12.568 } 00:04:12.568 Got JSON-RPC error response 00:04:12.568 response: 00:04:12.568 { 00:04:12.568 "code": -32602, 00:04:12.568 "message": "Invalid parameters" 00:04:12.568 } 00:04:12.568 04:58:54 -- common/autotest_common.sh@1591 -- # true 
00:04:12.568 04:58:54 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:12.569 04:58:54 -- common/autotest_common.sh@1595 -- # killprocess 271222 00:04:12.569 04:58:54 -- common/autotest_common.sh@954 -- # '[' -z 271222 ']' 00:04:12.569 04:58:54 -- common/autotest_common.sh@958 -- # kill -0 271222 00:04:12.569 04:58:54 -- common/autotest_common.sh@959 -- # uname 00:04:12.569 04:58:54 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:12.569 04:58:54 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 271222 00:04:12.569 04:58:54 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:12.569 04:58:54 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:12.569 04:58:54 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 271222' 00:04:12.569 killing process with pid 271222 00:04:12.569 04:58:54 -- common/autotest_common.sh@973 -- # kill 271222 00:04:12.569 04:58:54 -- common/autotest_common.sh@978 -- # wait 271222 00:04:15.117 04:58:57 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:15.117 04:58:57 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:15.117 04:58:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:15.117 04:58:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:15.117 04:58:57 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:15.117 04:58:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.117 04:58:57 -- common/autotest_common.sh@10 -- # set +x 00:04:15.117 04:58:57 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:15.117 04:58:57 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:15.117 04:58:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.117 04:58:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.117 04:58:57 -- common/autotest_common.sh@10 -- # set +x 00:04:15.117 ************************************ 00:04:15.117 START TEST env 00:04:15.117 
************************************ 00:04:15.117 04:58:57 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:15.117 * Looking for test storage... 00:04:15.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:15.117 04:58:57 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:15.117 04:58:57 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:15.117 04:58:57 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:15.117 04:58:57 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:15.117 04:58:57 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.117 04:58:57 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.117 04:58:57 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.117 04:58:57 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.117 04:58:57 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.117 04:58:57 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.117 04:58:57 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.117 04:58:57 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.117 04:58:57 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.117 04:58:57 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.117 04:58:57 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.117 04:58:57 env -- scripts/common.sh@344 -- # case "$op" in 00:04:15.117 04:58:57 env -- scripts/common.sh@345 -- # : 1 00:04:15.117 04:58:57 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.117 04:58:57 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.117 04:58:57 env -- scripts/common.sh@365 -- # decimal 1 00:04:15.117 04:58:57 env -- scripts/common.sh@353 -- # local d=1 00:04:15.117 04:58:57 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.117 04:58:57 env -- scripts/common.sh@355 -- # echo 1 00:04:15.117 04:58:57 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.117 04:58:57 env -- scripts/common.sh@366 -- # decimal 2 00:04:15.117 04:58:57 env -- scripts/common.sh@353 -- # local d=2 00:04:15.117 04:58:57 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.117 04:58:57 env -- scripts/common.sh@355 -- # echo 2 00:04:15.117 04:58:57 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.117 04:58:57 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.117 04:58:57 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.117 04:58:57 env -- scripts/common.sh@368 -- # return 0 00:04:15.117 04:58:57 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.117 04:58:57 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:15.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.117 --rc genhtml_branch_coverage=1 00:04:15.117 --rc genhtml_function_coverage=1 00:04:15.117 --rc genhtml_legend=1 00:04:15.117 --rc geninfo_all_blocks=1 00:04:15.117 --rc geninfo_unexecuted_blocks=1 00:04:15.117 00:04:15.117 ' 00:04:15.117 04:58:57 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:15.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.117 --rc genhtml_branch_coverage=1 00:04:15.117 --rc genhtml_function_coverage=1 00:04:15.117 --rc genhtml_legend=1 00:04:15.117 --rc geninfo_all_blocks=1 00:04:15.117 --rc geninfo_unexecuted_blocks=1 00:04:15.117 00:04:15.117 ' 00:04:15.117 04:58:57 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:15.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:15.117 --rc genhtml_branch_coverage=1 00:04:15.117 --rc genhtml_function_coverage=1 00:04:15.117 --rc genhtml_legend=1 00:04:15.117 --rc geninfo_all_blocks=1 00:04:15.117 --rc geninfo_unexecuted_blocks=1 00:04:15.117 00:04:15.117 ' 00:04:15.117 04:58:57 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:15.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.117 --rc genhtml_branch_coverage=1 00:04:15.117 --rc genhtml_function_coverage=1 00:04:15.117 --rc genhtml_legend=1 00:04:15.117 --rc geninfo_all_blocks=1 00:04:15.117 --rc geninfo_unexecuted_blocks=1 00:04:15.117 00:04:15.117 ' 00:04:15.117 04:58:57 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:15.117 04:58:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.117 04:58:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.117 04:58:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.117 ************************************ 00:04:15.117 START TEST env_memory 00:04:15.117 ************************************ 00:04:15.117 04:58:57 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:15.117 00:04:15.117 00:04:15.117 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.117 http://cunit.sourceforge.net/ 00:04:15.117 00:04:15.117 00:04:15.117 Suite: mem_map_2mb 00:04:15.117 Test: alloc and free memory map ...[2024-12-09 04:58:57.500092] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 311:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:15.117 passed 00:04:15.117 Test: mem map translation ...[2024-12-09 04:58:57.520067] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 629:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:15.117 [2024-12-09 
04:58:57.520081] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 629:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:15.117 [2024-12-09 04:58:57.520143] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 623:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:15.117 [2024-12-09 04:58:57.520151] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 639:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:15.117 passed 00:04:15.117 Test: mem map registration ...[2024-12-09 04:58:57.559954] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 381:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:15.117 [2024-12-09 04:58:57.559976] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 381:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:15.117 passed 00:04:15.375 Test: mem map adjacent registrations ...passed 00:04:15.375 Suite: mem_map_4kb 00:04:15.375 Test: alloc and free memory map ...[2024-12-09 04:58:57.666903] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 311:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:15.375 passed 00:04:15.375 Test: mem map translation ...[2024-12-09 04:58:57.690105] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 629:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=4096 len=1234 00:04:15.375 [2024-12-09 04:58:57.690123] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 629:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=4096 00:04:15.375 [2024-12-09 04:58:57.708186] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 623:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:15.375 [2024-12-09 04:58:57.708197] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 639:spdk_mem_map_set_translation: *ERROR*: could not get 0xfffffffff000 map 00:04:15.375 passed 00:04:15.375 Test: mem map registration ...[2024-12-09 04:58:57.779676] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 381:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=1000 len=1234 00:04:15.375 [2024-12-09 04:58:57.779700] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 381:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=4096 00:04:15.375 passed 00:04:15.635 Test: mem map adjacent registrations ...passed 00:04:15.635 00:04:15.635 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.635 suites 2 2 n/a 0 0 00:04:15.635 tests 8 8 8 0 0 00:04:15.635 asserts 304 304 304 0 n/a 00:04:15.635 00:04:15.635 Elapsed time = 0.398 seconds 00:04:15.635 00:04:15.635 real 0m0.413s 00:04:15.635 user 0m0.394s 00:04:15.635 sys 0m0.018s 00:04:15.635 04:58:57 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.635 04:58:57 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:15.635 ************************************ 00:04:15.635 END TEST env_memory 00:04:15.635 ************************************ 00:04:15.635 04:58:57 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:15.635 04:58:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.635 04:58:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.635 04:58:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.635 ************************************ 00:04:15.635 START TEST env_vtophys 00:04:15.635 
************************************ 00:04:15.635 04:58:57 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:15.635 EAL: lib.eal log level changed from notice to debug 00:04:15.635 EAL: Detected lcore 0 as core 0 on socket 0 00:04:15.635 EAL: Detected lcore 1 as core 1 on socket 0 00:04:15.635 EAL: Detected lcore 2 as core 2 on socket 0 00:04:15.635 EAL: Detected lcore 3 as core 3 on socket 0 00:04:15.635 EAL: Detected lcore 4 as core 4 on socket 0 00:04:15.635 EAL: Detected lcore 5 as core 5 on socket 0 00:04:15.635 EAL: Detected lcore 6 as core 6 on socket 0 00:04:15.635 EAL: Detected lcore 7 as core 8 on socket 0 00:04:15.635 EAL: Detected lcore 8 as core 9 on socket 0 00:04:15.635 EAL: Detected lcore 9 as core 10 on socket 0 00:04:15.635 EAL: Detected lcore 10 as core 11 on socket 0 00:04:15.635 EAL: Detected lcore 11 as core 12 on socket 0 00:04:15.635 EAL: Detected lcore 12 as core 13 on socket 0 00:04:15.635 EAL: Detected lcore 13 as core 14 on socket 0 00:04:15.635 EAL: Detected lcore 14 as core 16 on socket 0 00:04:15.635 EAL: Detected lcore 15 as core 17 on socket 0 00:04:15.635 EAL: Detected lcore 16 as core 18 on socket 0 00:04:15.635 EAL: Detected lcore 17 as core 19 on socket 0 00:04:15.635 EAL: Detected lcore 18 as core 20 on socket 0 00:04:15.635 EAL: Detected lcore 19 as core 21 on socket 0 00:04:15.635 EAL: Detected lcore 20 as core 22 on socket 0 00:04:15.635 EAL: Detected lcore 21 as core 24 on socket 0 00:04:15.635 EAL: Detected lcore 22 as core 25 on socket 0 00:04:15.635 EAL: Detected lcore 23 as core 26 on socket 0 00:04:15.635 EAL: Detected lcore 24 as core 27 on socket 0 00:04:15.635 EAL: Detected lcore 25 as core 28 on socket 0 00:04:15.635 EAL: Detected lcore 26 as core 29 on socket 0 00:04:15.635 EAL: Detected lcore 27 as core 30 on socket 0 00:04:15.635 EAL: Detected lcore 28 as core 0 on socket 1 00:04:15.635 EAL: Detected lcore 29 as core 1 on 
socket 1 00:04:15.635 EAL: Detected lcore 30 as core 2 on socket 1 00:04:15.635 EAL: Detected lcore 31 as core 3 on socket 1 00:04:15.635 EAL: Detected lcore 32 as core 4 on socket 1 00:04:15.635 EAL: Detected lcore 33 as core 5 on socket 1 00:04:15.635 EAL: Detected lcore 34 as core 6 on socket 1 00:04:15.635 EAL: Detected lcore 35 as core 8 on socket 1 00:04:15.635 EAL: Detected lcore 36 as core 9 on socket 1 00:04:15.635 EAL: Detected lcore 37 as core 10 on socket 1 00:04:15.635 EAL: Detected lcore 38 as core 11 on socket 1 00:04:15.635 EAL: Detected lcore 39 as core 12 on socket 1 00:04:15.635 EAL: Detected lcore 40 as core 13 on socket 1 00:04:15.635 EAL: Detected lcore 41 as core 14 on socket 1 00:04:15.635 EAL: Detected lcore 42 as core 16 on socket 1 00:04:15.635 EAL: Detected lcore 43 as core 17 on socket 1 00:04:15.635 EAL: Detected lcore 44 as core 18 on socket 1 00:04:15.635 EAL: Detected lcore 45 as core 19 on socket 1 00:04:15.635 EAL: Detected lcore 46 as core 20 on socket 1 00:04:15.635 EAL: Detected lcore 47 as core 21 on socket 1 00:04:15.635 EAL: Detected lcore 48 as core 22 on socket 1 00:04:15.635 EAL: Detected lcore 49 as core 24 on socket 1 00:04:15.635 EAL: Detected lcore 50 as core 25 on socket 1 00:04:15.635 EAL: Detected lcore 51 as core 26 on socket 1 00:04:15.635 EAL: Detected lcore 52 as core 27 on socket 1 00:04:15.635 EAL: Detected lcore 53 as core 28 on socket 1 00:04:15.635 EAL: Detected lcore 54 as core 29 on socket 1 00:04:15.635 EAL: Detected lcore 55 as core 30 on socket 1 00:04:15.635 EAL: Detected lcore 56 as core 0 on socket 0 00:04:15.635 EAL: Detected lcore 57 as core 1 on socket 0 00:04:15.635 EAL: Detected lcore 58 as core 2 on socket 0 00:04:15.635 EAL: Detected lcore 59 as core 3 on socket 0 00:04:15.635 EAL: Detected lcore 60 as core 4 on socket 0 00:04:15.635 EAL: Detected lcore 61 as core 5 on socket 0 00:04:15.635 EAL: Detected lcore 62 as core 6 on socket 0 00:04:15.635 EAL: Detected lcore 63 as core 8 on socket 0 
00:04:15.635 EAL: Detected lcore 64 as core 9 on socket 0 00:04:15.635 EAL: Detected lcore 65 as core 10 on socket 0 00:04:15.635 EAL: Detected lcore 66 as core 11 on socket 0 00:04:15.635 EAL: Detected lcore 67 as core 12 on socket 0 00:04:15.635 EAL: Detected lcore 68 as core 13 on socket 0 00:04:15.635 EAL: Detected lcore 69 as core 14 on socket 0 00:04:15.635 EAL: Detected lcore 70 as core 16 on socket 0 00:04:15.635 EAL: Detected lcore 71 as core 17 on socket 0 00:04:15.635 EAL: Detected lcore 72 as core 18 on socket 0 00:04:15.635 EAL: Detected lcore 73 as core 19 on socket 0 00:04:15.635 EAL: Detected lcore 74 as core 20 on socket 0 00:04:15.635 EAL: Detected lcore 75 as core 21 on socket 0 00:04:15.635 EAL: Detected lcore 76 as core 22 on socket 0 00:04:15.635 EAL: Detected lcore 77 as core 24 on socket 0 00:04:15.635 EAL: Detected lcore 78 as core 25 on socket 0 00:04:15.635 EAL: Detected lcore 79 as core 26 on socket 0 00:04:15.635 EAL: Detected lcore 80 as core 27 on socket 0 00:04:15.635 EAL: Detected lcore 81 as core 28 on socket 0 00:04:15.635 EAL: Detected lcore 82 as core 29 on socket 0 00:04:15.635 EAL: Detected lcore 83 as core 30 on socket 0 00:04:15.635 EAL: Detected lcore 84 as core 0 on socket 1 00:04:15.635 EAL: Detected lcore 85 as core 1 on socket 1 00:04:15.635 EAL: Detected lcore 86 as core 2 on socket 1 00:04:15.635 EAL: Detected lcore 87 as core 3 on socket 1 00:04:15.635 EAL: Detected lcore 88 as core 4 on socket 1 00:04:15.635 EAL: Detected lcore 89 as core 5 on socket 1 00:04:15.635 EAL: Detected lcore 90 as core 6 on socket 1 00:04:15.635 EAL: Detected lcore 91 as core 8 on socket 1 00:04:15.635 EAL: Detected lcore 92 as core 9 on socket 1 00:04:15.635 EAL: Detected lcore 93 as core 10 on socket 1 00:04:15.635 EAL: Detected lcore 94 as core 11 on socket 1 00:04:15.635 EAL: Detected lcore 95 as core 12 on socket 1 00:04:15.635 EAL: Detected lcore 96 as core 13 on socket 1 00:04:15.635 EAL: Detected lcore 97 as core 14 on socket 1 
00:04:15.635 EAL: Detected lcore 98 as core 16 on socket 1 00:04:15.635 EAL: Detected lcore 99 as core 17 on socket 1 00:04:15.635 EAL: Detected lcore 100 as core 18 on socket 1 00:04:15.635 EAL: Detected lcore 101 as core 19 on socket 1 00:04:15.635 EAL: Detected lcore 102 as core 20 on socket 1 00:04:15.635 EAL: Detected lcore 103 as core 21 on socket 1 00:04:15.635 EAL: Detected lcore 104 as core 22 on socket 1 00:04:15.635 EAL: Detected lcore 105 as core 24 on socket 1 00:04:15.635 EAL: Detected lcore 106 as core 25 on socket 1 00:04:15.635 EAL: Detected lcore 107 as core 26 on socket 1 00:04:15.635 EAL: Detected lcore 108 as core 27 on socket 1 00:04:15.635 EAL: Detected lcore 109 as core 28 on socket 1 00:04:15.635 EAL: Detected lcore 110 as core 29 on socket 1 00:04:15.635 EAL: Detected lcore 111 as core 30 on socket 1 00:04:15.635 EAL: Maximum logical cores by configuration: 128 00:04:15.635 EAL: Detected CPU lcores: 112 00:04:15.635 EAL: Detected NUMA nodes: 2 00:04:15.635 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:15.635 EAL: Detected shared linkage of DPDK 00:04:15.635 EAL: No shared files mode enabled, IPC will be disabled 00:04:15.635 EAL: Bus pci wants IOVA as 'DC' 00:04:15.635 EAL: Buses did not request a specific IOVA mode. 00:04:15.635 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:15.635 EAL: Selected IOVA mode 'VA' 00:04:15.636 EAL: Probing VFIO support... 00:04:15.636 EAL: IOMMU type 1 (Type 1) is supported 00:04:15.636 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:15.636 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:15.636 EAL: VFIO support initialized 00:04:15.636 EAL: Ask a virtual area of 0x2e000 bytes 00:04:15.636 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:15.636 EAL: Setting up physically contiguous memory... 
00:04:15.636 EAL: Setting maximum number of open files to 524288
00:04:15.636 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:15.636 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:04:15.636 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:15.636 EAL: Ask a virtual area of 0x61000 bytes
00:04:15.636 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:15.636 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:15.636 EAL: Ask a virtual area of 0x400000000 bytes
00:04:15.636 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:15.636 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:15.636 EAL: Ask a virtual area of 0x61000 bytes
00:04:15.636 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:15.636 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:15.636 EAL: Ask a virtual area of 0x400000000 bytes
00:04:15.636 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:15.636 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:15.636 EAL: Ask a virtual area of 0x61000 bytes
00:04:15.636 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:15.636 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:15.636 EAL: Ask a virtual area of 0x400000000 bytes
00:04:15.636 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:15.636 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:15.636 EAL: Ask a virtual area of 0x61000 bytes
00:04:15.636 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:15.636 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:15.636 EAL: Ask a virtual area of 0x400000000 bytes
00:04:15.636 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:15.636 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:15.636 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:04:15.636 EAL: Ask a virtual area of 0x61000 bytes
00:04:15.636 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:04:15.636 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:15.636 EAL: Ask a virtual area of 0x400000000 bytes
00:04:15.636 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:04:15.636 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:04:15.636 EAL: Ask a virtual area of 0x61000 bytes
00:04:15.636 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:04:15.636 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:15.636 EAL: Ask a virtual area of 0x400000000 bytes
00:04:15.636 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:04:15.636 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:04:15.636 EAL: Ask a virtual area of 0x61000 bytes
00:04:15.636 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:04:15.636 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:15.636 EAL: Ask a virtual area of 0x400000000 bytes
00:04:15.636 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:04:15.636 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:04:15.636 EAL: Ask a virtual area of 0x61000 bytes
00:04:15.636 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:04:15.636 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:15.636 EAL: Ask a virtual area of 0x400000000 bytes
00:04:15.636 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:04:15.636 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:04:15.636 EAL: Hugepages will be freed exactly as allocated.
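The memseg lists above are reserved for 2 MiB hugepages (hugepage_sz:2097152) on both NUMA nodes. The hugepage pool backing them can be inspected through standard Linux interfaces; a sketch (assumes procfs and sysfs at their usual paths, not part of the autotest scripts):

```shell
# Hugepage totals as the kernel accounts them.
grep -i '^Huge' /proc/meminfo || true
# Per-size pools; hugepages-2048kB matches hugepage_sz:2097152 in the log.
ls /sys/kernel/mm/hugepages 2>/dev/null || true
```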
00:04:15.636 EAL: No shared files mode enabled, IPC is disabled
00:04:15.636 EAL: No shared files mode enabled, IPC is disabled
00:04:15.636 EAL: TSC frequency is ~2500000 KHz
00:04:15.636 EAL: Main lcore 0 is ready (tid=7f75f05e8a00;cpuset=[0])
00:04:15.636 EAL: Trying to obtain current memory policy.
00:04:15.636 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:15.636 EAL: Restoring previous memory policy: 0
00:04:15.636 EAL: request: mp_malloc_sync
00:04:15.636 EAL: No shared files mode enabled, IPC is disabled
00:04:15.636 EAL: Heap on socket 0 was expanded by 2MB
00:04:15.636 EAL: No shared files mode enabled, IPC is disabled
00:04:15.636 EAL: No PCI address specified using 'addr=' in: bus=pci
00:04:15.636 EAL: Mem event callback 'spdk:(nil)' registered
00:04:15.636
00:04:15.636
00:04:15.636 CUnit - A unit testing framework for C - Version 2.1-3
00:04:15.636 http://cunit.sourceforge.net/
00:04:15.636
00:04:15.636
00:04:15.636 Suite: components_suite
00:04:15.636 Test: vtophys_malloc_test ...passed
00:04:15.636 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:15.636 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:15.636 EAL: Restoring previous memory policy: 4
00:04:15.636 EAL: Calling mem event callback 'spdk:(nil)'
00:04:15.636 EAL: request: mp_malloc_sync
00:04:15.636 EAL: No shared files mode enabled, IPC is disabled
00:04:15.636 EAL: Heap on socket 0 was expanded by 4MB
00:04:15.636 EAL: Calling mem event callback 'spdk:(nil)'
00:04:15.636 EAL: request: mp_malloc_sync
00:04:15.636 EAL: No shared files mode enabled, IPC is disabled
00:04:15.636 EAL: Heap on socket 0 was shrunk by 4MB
00:04:15.636 EAL: Trying to obtain current memory policy.
00:04:15.636 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:15.636 EAL: Restoring previous memory policy: 4
00:04:15.636 EAL: Calling mem event callback 'spdk:(nil)'
00:04:15.636 EAL: request: mp_malloc_sync
00:04:15.636 EAL: No shared files mode enabled, IPC is disabled
00:04:15.636 EAL: Heap on socket 0 was expanded by 6MB
00:04:15.636 EAL: Calling mem event callback 'spdk:(nil)'
00:04:15.636 EAL: request: mp_malloc_sync
00:04:15.636 EAL: No shared files mode enabled, IPC is disabled
00:04:15.636 EAL: Heap on socket 0 was shrunk by 6MB
00:04:15.636 EAL: Trying to obtain current memory policy.
00:04:15.636 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:15.636 EAL: Restoring previous memory policy: 4
00:04:15.636 EAL: Calling mem event callback 'spdk:(nil)'
00:04:15.636 EAL: request: mp_malloc_sync
00:04:15.636 EAL: No shared files mode enabled, IPC is disabled
00:04:15.636 EAL: Heap on socket 0 was expanded by 10MB
00:04:15.636 EAL: Calling mem event callback 'spdk:(nil)'
00:04:15.636 EAL: request: mp_malloc_sync
00:04:15.636 EAL: No shared files mode enabled, IPC is disabled
00:04:15.636 EAL: Heap on socket 0 was shrunk by 10MB
00:04:15.636 EAL: Trying to obtain current memory policy.
00:04:15.636 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:15.636 EAL: Restoring previous memory policy: 4
00:04:15.636 EAL: Calling mem event callback 'spdk:(nil)'
00:04:15.636 EAL: request: mp_malloc_sync
00:04:15.636 EAL: No shared files mode enabled, IPC is disabled
00:04:15.636 EAL: Heap on socket 0 was expanded by 18MB
00:04:15.636 EAL: Calling mem event callback 'spdk:(nil)'
00:04:15.636 EAL: request: mp_malloc_sync
00:04:15.636 EAL: No shared files mode enabled, IPC is disabled
00:04:15.636 EAL: Heap on socket 0 was shrunk by 18MB
00:04:15.636 EAL: Trying to obtain current memory policy.
00:04:15.636 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:15.636 EAL: Restoring previous memory policy: 4
00:04:15.636 EAL: Calling mem event callback 'spdk:(nil)'
00:04:15.636 EAL: request: mp_malloc_sync
00:04:15.636 EAL: No shared files mode enabled, IPC is disabled
00:04:15.636 EAL: Heap on socket 0 was expanded by 34MB
00:04:15.636 EAL: Calling mem event callback 'spdk:(nil)'
00:04:15.636 EAL: request: mp_malloc_sync
00:04:15.636 EAL: No shared files mode enabled, IPC is disabled
00:04:15.636 EAL: Heap on socket 0 was shrunk by 34MB
00:04:15.636 EAL: Trying to obtain current memory policy.
00:04:15.636 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:15.636 EAL: Restoring previous memory policy: 4
00:04:15.636 EAL: Calling mem event callback 'spdk:(nil)'
00:04:15.636 EAL: request: mp_malloc_sync
00:04:15.636 EAL: No shared files mode enabled, IPC is disabled
00:04:15.636 EAL: Heap on socket 0 was expanded by 66MB
00:04:15.896 EAL: Calling mem event callback 'spdk:(nil)'
00:04:15.896 EAL: request: mp_malloc_sync
00:04:15.896 EAL: No shared files mode enabled, IPC is disabled
00:04:15.896 EAL: Heap on socket 0 was shrunk by 66MB
00:04:15.896 EAL: Trying to obtain current memory policy.
00:04:15.896 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:15.896 EAL: Restoring previous memory policy: 4
00:04:15.896 EAL: Calling mem event callback 'spdk:(nil)'
00:04:15.896 EAL: request: mp_malloc_sync
00:04:15.896 EAL: No shared files mode enabled, IPC is disabled
00:04:15.896 EAL: Heap on socket 0 was expanded by 130MB
00:04:15.896 EAL: Calling mem event callback 'spdk:(nil)'
00:04:15.896 EAL: request: mp_malloc_sync
00:04:15.896 EAL: No shared files mode enabled, IPC is disabled
00:04:15.896 EAL: Heap on socket 0 was shrunk by 130MB
00:04:15.896 EAL: Trying to obtain current memory policy.
00:04:15.896 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:15.896 EAL: Restoring previous memory policy: 4
00:04:15.896 EAL: Calling mem event callback 'spdk:(nil)'
00:04:15.896 EAL: request: mp_malloc_sync
00:04:15.896 EAL: No shared files mode enabled, IPC is disabled
00:04:15.896 EAL: Heap on socket 0 was expanded by 258MB
00:04:15.896 EAL: Calling mem event callback 'spdk:(nil)'
00:04:15.896 EAL: request: mp_malloc_sync
00:04:15.896 EAL: No shared files mode enabled, IPC is disabled
00:04:15.896 EAL: Heap on socket 0 was shrunk by 258MB
00:04:15.896 EAL: Trying to obtain current memory policy.
00:04:15.896 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:16.156 EAL: Restoring previous memory policy: 4
00:04:16.156 EAL: Calling mem event callback 'spdk:(nil)'
00:04:16.156 EAL: request: mp_malloc_sync
00:04:16.156 EAL: No shared files mode enabled, IPC is disabled
00:04:16.156 EAL: Heap on socket 0 was expanded by 514MB
00:04:16.156 EAL: Calling mem event callback 'spdk:(nil)'
00:04:16.156 EAL: request: mp_malloc_sync
00:04:16.156 EAL: No shared files mode enabled, IPC is disabled
00:04:16.156 EAL: Heap on socket 0 was shrunk by 514MB
00:04:16.156 EAL: Trying to obtain current memory policy.
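The vtophys_spdk_malloc_test expansions logged here step through 4, 6, 10, 18, 34, 66, 130, 258 and 514 MB (with 1026 MB following): each size is a power of two plus 2 MB, so the test crosses allocation-size boundaries rather than landing exactly on them. The progression can be reproduced directly (an observation about the logged sizes, not SPDK code):

```shell
# Heap expansion sizes seen in the log: (2^n) + 2 MB for n = 1..10.
sizes=""
for n in 1 2 3 4 5 6 7 8 9 10; do
    sizes="$sizes $(( (1 << n) + 2 ))"
done
echo "expansion sizes (MB):$sizes"
```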
00:04:16.156 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:16.428 EAL: Restoring previous memory policy: 4
00:04:16.428 EAL: Calling mem event callback 'spdk:(nil)'
00:04:16.428 EAL: request: mp_malloc_sync
00:04:16.428 EAL: No shared files mode enabled, IPC is disabled
00:04:16.428 EAL: Heap on socket 0 was expanded by 1026MB
00:04:16.687 EAL: Calling mem event callback 'spdk:(nil)'
00:04:16.687 EAL: request: mp_malloc_sync
00:04:16.687 EAL: No shared files mode enabled, IPC is disabled
00:04:16.687 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:16.687 passed
00:04:16.687
00:04:16.687 Run Summary: Type Total Ran Passed Failed Inactive
00:04:16.687 suites 1 1 n/a 0 0
00:04:16.687 tests 2 2 2 0 0
00:04:16.687 asserts 497 497 497 0 n/a
00:04:16.687
00:04:16.687 Elapsed time = 0.972 seconds
00:04:16.687 EAL: Calling mem event callback 'spdk:(nil)'
00:04:16.687 EAL: request: mp_malloc_sync
00:04:16.687 EAL: No shared files mode enabled, IPC is disabled
00:04:16.687 EAL: Heap on socket 0 was shrunk by 2MB
00:04:16.687 EAL: No shared files mode enabled, IPC is disabled
00:04:16.687 EAL: No shared files mode enabled, IPC is disabled
00:04:16.687 EAL: No shared files mode enabled, IPC is disabled
00:04:16.687
00:04:16.687 real 0m1.132s
00:04:16.687 user 0m0.656s
00:04:16.687 sys 0m0.443s
00:04:16.687 04:58:59 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:16.687 04:58:59 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:16.687 ************************************
00:04:16.687 END TEST env_vtophys
00:04:16.687 ************************************
00:04:16.687 04:58:59 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:16.687 04:58:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:16.687 04:58:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:16.687 04:58:59 env -- common/autotest_common.sh@10 -- # set +x
00:04:16.947 ************************************
00:04:16.947 START TEST env_pci
************************************
04:58:59 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:16.947
00:04:16.947
00:04:16.947 CUnit - A unit testing framework for C - Version 2.1-3
00:04:16.947 http://cunit.sourceforge.net/
00:04:16.947
00:04:16.947
00:04:16.947 Suite: pci
00:04:16.947 Test: pci_hook ...[2024-12-09 04:58:59.180453] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 272777 has claimed it
00:04:16.947 EAL: Cannot find device (10000:00:01.0)
00:04:16.947 EAL: Failed to attach device on primary process
00:04:16.947 passed
00:04:16.947
00:04:16.947 Run Summary: Type Total Ran Passed Failed Inactive
00:04:16.947 suites 1 1 n/a 0 0
00:04:16.947 tests 1 1 1 0 0
00:04:16.947 asserts 25 25 25 0 n/a
00:04:16.947
00:04:16.947 Elapsed time = 0.034 seconds
00:04:16.947
00:04:16.947 real 0m0.058s
00:04:16.947 user 0m0.017s
00:04:16.947 sys 0m0.041s
00:04:16.947 04:58:59 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:16.947 04:58:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:16.947 ************************************
00:04:16.947 END TEST env_pci
************************************
00:04:16.947 04:58:59 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:16.947 04:58:59 env -- env/env.sh@15 -- # uname
00:04:16.947 04:58:59 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:16.947 04:58:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:16.947 04:58:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
04:58:59 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:16.947 04:58:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:16.947 04:58:59 env -- common/autotest_common.sh@10 -- # set +x
00:04:16.947 ************************************
00:04:16.947 START TEST env_dpdk_post_init
00:04:16.947 ************************************
00:04:16.947 04:58:59 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:16.948 EAL: Detected CPU lcores: 112
00:04:16.948 EAL: Detected NUMA nodes: 2
00:04:16.948 EAL: Detected shared linkage of DPDK
00:04:16.948 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:16.948 EAL: Selected IOVA mode 'VA'
00:04:16.948 EAL: VFIO support initialized
00:04:16.948 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:17.207 EAL: Using IOMMU type 1 (Type 1)
00:04:17.207 EAL: Ignore mapping IO port bar(1)
00:04:17.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:04:17.207 EAL: Ignore mapping IO port bar(1)
00:04:17.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:04:17.207 EAL: Ignore mapping IO port bar(1)
00:04:17.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:04:17.207 EAL: Ignore mapping IO port bar(1)
00:04:17.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:04:17.207 EAL: Ignore mapping IO port bar(1)
00:04:17.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:04:17.207 EAL: Ignore mapping IO port bar(1)
00:04:17.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:04:17.207 EAL: Ignore mapping IO port bar(1)
00:04:17.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:04:17.207 EAL: Ignore mapping IO port bar(1)
00:04:17.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:04:17.207 EAL: Ignore mapping IO port bar(1)
00:04:17.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:04:17.207 EAL: Ignore mapping IO port bar(1)
00:04:17.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:04:17.207 EAL: Ignore mapping IO port bar(1)
00:04:17.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:04:17.207 EAL: Ignore mapping IO port bar(1)
00:04:17.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:04:17.207 EAL: Ignore mapping IO port bar(1)
00:04:17.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:04:17.207 EAL: Ignore mapping IO port bar(1)
00:04:17.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:04:17.207 EAL: Ignore mapping IO port bar(1)
00:04:17.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:04:17.207 EAL: Ignore mapping IO port bar(1)
00:04:17.207 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:04:18.147 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1)
00:04:22.342 EAL: Releasing PCI mapped resource for 0000:d8:00.0
00:04:22.342 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000
00:04:22.342 Starting DPDK initialization...
00:04:22.342 Starting SPDK post initialization...
00:04:22.342 SPDK NVMe probe
00:04:22.342 Attaching to 0000:d8:00.0
00:04:22.342 Attached to 0000:d8:00.0
00:04:22.342 Cleaning up...
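EAL probes the spdk_ioat (8086:2021) and spdk_nvme (8086:0a54) functions above by PCI address. The same devices can be located without DPDK through sysfs; a sketch (assumes the standard Linux sysfs PCI layout; the 0x8086 vendor ID is taken from the log):

```shell
# List PCI functions with Intel's vendor ID (0x8086), as probed above.
for dev in /sys/bus/pci/devices/*; do
    [ -e "$dev/vendor" ] || continue
    if [ "$(cat "$dev/vendor")" = "0x8086" ]; then
        echo "$(basename "$dev") device=$(cat "$dev/device")"
    fi
done
```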
00:04:22.342
00:04:22.342 real 0m4.982s
00:04:22.342 user 0m3.424s
00:04:22.342 sys 0m0.603s
04:59:04 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
04:59:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:22.342 ************************************
00:04:22.342 END TEST env_dpdk_post_init
************************************
04:59:04 env -- env/env.sh@26 -- # uname
00:04:22.342 04:59:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:22.342 04:59:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:22.342 04:59:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:22.342 04:59:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:22.342 04:59:04 env -- common/autotest_common.sh@10 -- # set +x
00:04:22.342 ************************************
00:04:22.342 START TEST env_mem_callbacks
00:04:22.342 ************************************
00:04:22.342 04:59:04 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:22.342 EAL: Detected CPU lcores: 112
00:04:22.342 EAL: Detected NUMA nodes: 2
00:04:22.342 EAL: Detected shared linkage of DPDK
00:04:22.342 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:22.342 EAL: Selected IOVA mode 'VA'
00:04:22.342 EAL: VFIO support initialized
00:04:22.342 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:22.342
00:04:22.342
00:04:22.342 CUnit - A unit testing framework for C - Version 2.1-3
00:04:22.342 http://cunit.sourceforge.net/
00:04:22.342
00:04:22.342
00:04:22.342 Suite: memory
00:04:22.342 Test: test ...
00:04:22.343 register 0x200000200000 2097152
00:04:22.343 malloc 3145728
00:04:22.343 register 0x200000400000 4194304
00:04:22.343 buf 0x200000500000 len 3145728 PASSED
00:04:22.343 malloc 64
00:04:22.343 buf 0x2000004fff40 len 64 PASSED
00:04:22.343 malloc 4194304
00:04:22.343 register 0x200000800000 6291456
00:04:22.343 buf 0x200000a00000 len 4194304 PASSED
00:04:22.343 free 0x200000500000 3145728
00:04:22.343 free 0x2000004fff40 64
00:04:22.343 unregister 0x200000400000 4194304 PASSED
00:04:22.343 free 0x200000a00000 4194304
00:04:22.343 unregister 0x200000800000 6291456 PASSED
00:04:22.343 malloc 8388608
00:04:22.343 register 0x200000400000 10485760
00:04:22.343 buf 0x200000600000 len 8388608 PASSED
00:04:22.343 free 0x200000600000 8388608
00:04:22.343 unregister 0x200000400000 10485760 PASSED
00:04:22.343 passed
00:04:22.343
00:04:22.343 Run Summary: Type Total Ran Passed Failed Inactive
00:04:22.343 suites 1 1 n/a 0 0
00:04:22.343 tests 1 1 1 0 0
00:04:22.343 asserts 15 15 15 0 n/a
00:04:22.343
00:04:22.343 Elapsed time = 0.009 seconds
00:04:22.343
00:04:22.343 real 0m0.074s
00:04:22.343 user 0m0.023s
00:04:22.343 sys 0m0.051s
00:04:22.343 04:59:04 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:22.343 04:59:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:22.343 ************************************
00:04:22.343 END TEST env_mem_callbacks
************************************
00:04:22.343
00:04:22.343 real 0m7.280s
00:04:22.343 user 0m4.772s
00:04:22.343 sys 0m1.568s
00:04:22.343 04:59:04 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:22.343 04:59:04 env -- common/autotest_common.sh@10 -- # set +x
00:04:22.343 ************************************
00:04:22.343 END TEST env
00:04:22.343 ************************************
00:04:22.343 04:59:04 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
04:59:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
04:59:04 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:22.343 04:59:04 -- common/autotest_common.sh@10 -- # set +x
00:04:22.343 ************************************
00:04:22.343 START TEST rpc
************************************
04:59:04 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:22.343 * Looking for test storage...
00:04:22.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:22.343 04:59:04 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:22.343 04:59:04 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:04:22.343 04:59:04 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:22.343 04:59:04 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:22.343 04:59:04 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:22.343 04:59:04 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:22.343 04:59:04 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:22.343 04:59:04 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:22.343 04:59:04 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:22.343 04:59:04 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:22.343 04:59:04 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:22.343 04:59:04 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:22.343 04:59:04 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:22.343 04:59:04 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:22.343 04:59:04 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:22.343 04:59:04 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:22.343 04:59:04 rpc -- scripts/common.sh@345 -- # : 1
00:04:22.343 04:59:04 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:22.343 04:59:04 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:22.343 04:59:04 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:22.343 04:59:04 rpc -- scripts/common.sh@353 -- # local d=1
00:04:22.343 04:59:04 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:22.343 04:59:04 rpc -- scripts/common.sh@355 -- # echo 1
00:04:22.343 04:59:04 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:22.343 04:59:04 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:22.343 04:59:04 rpc -- scripts/common.sh@353 -- # local d=2
00:04:22.343 04:59:04 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:22.343 04:59:04 rpc -- scripts/common.sh@355 -- # echo 2
00:04:22.343 04:59:04 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:22.343 04:59:04 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:22.343 04:59:04 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:22.343 04:59:04 rpc -- scripts/common.sh@368 -- # return 0
00:04:22.343 04:59:04 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:22.343 04:59:04 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:22.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:22.343 --rc genhtml_branch_coverage=1
00:04:22.343 --rc genhtml_function_coverage=1
00:04:22.343 --rc genhtml_legend=1
00:04:22.343 --rc geninfo_all_blocks=1
00:04:22.343 --rc geninfo_unexecuted_blocks=1
00:04:22.343
00:04:22.343 '
00:04:22.343 04:59:04 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:22.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:22.343 --rc genhtml_branch_coverage=1
00:04:22.343 --rc genhtml_function_coverage=1
00:04:22.343 --rc genhtml_legend=1
00:04:22.343 --rc geninfo_all_blocks=1
00:04:22.343 --rc geninfo_unexecuted_blocks=1
00:04:22.343
00:04:22.343 '
00:04:22.343 04:59:04 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:22.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:22.343 --rc genhtml_branch_coverage=1
00:04:22.343 --rc genhtml_function_coverage=1
00:04:22.343 --rc genhtml_legend=1
00:04:22.343 --rc geninfo_all_blocks=1
00:04:22.343 --rc geninfo_unexecuted_blocks=1
00:04:22.343
00:04:22.343 '
00:04:22.343 04:59:04 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:22.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:22.343 --rc genhtml_branch_coverage=1
00:04:22.343 --rc genhtml_function_coverage=1
00:04:22.343 --rc genhtml_legend=1
00:04:22.343 --rc geninfo_all_blocks=1
00:04:22.343 --rc geninfo_unexecuted_blocks=1
00:04:22.343
00:04:22.343 '
00:04:22.343 04:59:04 rpc -- rpc/rpc.sh@65 -- # spdk_pid=273916
00:04:22.343 04:59:04 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:22.343 04:59:04 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:22.343 04:59:04 rpc -- rpc/rpc.sh@67 -- # waitforlisten 273916
00:04:22.343 04:59:04 rpc -- common/autotest_common.sh@835 -- # '[' -z 273916 ']'
00:04:22.343 04:59:04 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:22.343 04:59:04 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:22.343 04:59:04 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:22.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:22.343 04:59:04 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:22.343 04:59:04 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:22.603 [2024-12-09 04:59:04.839820] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
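The cmp_versions trace above ("lt 1.15 2") is scripts/common.sh comparing lcov's version field by field after splitting on dots. Where GNU coreutils is available, the same ordering can be obtained from sort -V; a sketch (an illustrative shortcut, not the SPDK helper itself):

```shell
# lt A B: true when version A sorts strictly before version B.
lt() {
    [ "$1" != "$2" ] &&
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

lt 1.15 2 && echo "1.15 < 2"
lt 2 1.15 || echo "2 is not < 1.15"
```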
00:04:22.603 [2024-12-09 04:59:04.839876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid273916 ]
00:04:22.603 [2024-12-09 04:59:04.929196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:22.603 [2024-12-09 04:59:04.970071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:22.603 [2024-12-09 04:59:04.970109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 273916' to capture a snapshot of events at runtime.
00:04:22.603 [2024-12-09 04:59:04.970119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:22.603 [2024-12-09 04:59:04.970128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:22.603 [2024-12-09 04:59:04.970151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid273916 for offline analysis/debug.
00:04:22.603 [2024-12-09 04:59:04.970723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:23.541 04:59:05 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:23.541 04:59:05 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:23.541 04:59:05 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:23.541 04:59:05 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:23.541 04:59:05 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:23.541 04:59:05 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:23.541 04:59:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:23.541 04:59:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:23.541 04:59:05 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:23.541 ************************************
00:04:23.541 START TEST rpc_integrity
00:04:23.541 ************************************
00:04:23.541 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:23.541 04:59:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:23.541 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:23.541 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:23.541 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:23.541 04:59:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:23.541 04:59:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:23.541 04:59:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:23.541 04:59:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:23.541 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:23.541 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:23.542 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:23.542 04:59:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:23.542 04:59:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:23.542 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:23.542 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:23.542 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:23.542 04:59:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:23.542 {
00:04:23.542 "name": "Malloc0",
00:04:23.542 "aliases": [
00:04:23.542 "ebbe1f99-c3b0-4b90-bf60-ce9e08d06666"
00:04:23.542 ],
00:04:23.542 "product_name": "Malloc disk",
00:04:23.542 "block_size": 512,
00:04:23.542 "num_blocks": 16384,
00:04:23.542 "uuid": "ebbe1f99-c3b0-4b90-bf60-ce9e08d06666",
00:04:23.542 "assigned_rate_limits": {
00:04:23.542 "rw_ios_per_sec": 0,
00:04:23.542 "rw_mbytes_per_sec": 0,
00:04:23.542 "r_mbytes_per_sec": 0,
00:04:23.542 "w_mbytes_per_sec": 0
00:04:23.542 },
00:04:23.542 "claimed": false,
00:04:23.542 "zoned": false,
00:04:23.542 "supported_io_types": {
00:04:23.542 "read": true,
00:04:23.542 "write": true,
00:04:23.542 "unmap": true,
00:04:23.542 "flush": true,
00:04:23.542 "reset": true,
00:04:23.542 "nvme_admin": false,
00:04:23.542 "nvme_io": false,
00:04:23.542 "nvme_io_md": false,
00:04:23.542 "write_zeroes": true,
00:04:23.542 "zcopy": true,
00:04:23.542 "get_zone_info": false,
00:04:23.542 "zone_management": false,
00:04:23.542 "zone_append": false,
00:04:23.542 "compare": false,
00:04:23.542 "compare_and_write": false,
00:04:23.542 "abort": true,
00:04:23.542 "seek_hole": false,
00:04:23.542 "seek_data": false,
00:04:23.542 "copy": true,
00:04:23.542 "nvme_iov_md": false
00:04:23.542 },
00:04:23.542 "memory_domains": [
00:04:23.542 {
00:04:23.542 "dma_device_id": "system",
00:04:23.542 "dma_device_type": 1
00:04:23.542 },
00:04:23.542 {
00:04:23.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:23.542 "dma_device_type": 2
00:04:23.542 }
00:04:23.542 ],
00:04:23.542 "driver_specific": {}
00:04:23.542 }
00:04:23.542 ]'
00:04:23.542 04:59:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:23.542 04:59:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:23.542 04:59:05 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:23.542 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:23.542 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:23.542 [2024-12-09 04:59:05.850513] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:23.542 [2024-12-09 04:59:05.850541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:23.542 [2024-12-09 04:59:05.850554] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x608800
00:04:23.542 [2024-12-09 04:59:05.850562] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:23.542 [2024-12-09 04:59:05.851673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:23.542 [2024-12-09 04:59:05.851695] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:23.542 Passthru0
00:04:23.542 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:23.542 04:59:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:23.542 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:23.542 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:23.542 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:23.542 04:59:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:23.542 {
00:04:23.542 "name": "Malloc0",
00:04:23.542 "aliases": [
00:04:23.542 "ebbe1f99-c3b0-4b90-bf60-ce9e08d06666"
00:04:23.542 ],
00:04:23.542 "product_name": "Malloc disk",
00:04:23.542 "block_size": 512,
00:04:23.542 "num_blocks": 16384,
00:04:23.542 "uuid": "ebbe1f99-c3b0-4b90-bf60-ce9e08d06666",
00:04:23.542 "assigned_rate_limits": {
00:04:23.542 "rw_ios_per_sec": 0,
00:04:23.542 "rw_mbytes_per_sec": 0,
00:04:23.542 "r_mbytes_per_sec": 0,
00:04:23.542 "w_mbytes_per_sec": 0
00:04:23.542 },
00:04:23.542 "claimed": true,
00:04:23.542 "claim_type": "exclusive_write",
00:04:23.542 "zoned": false,
00:04:23.542 "supported_io_types": {
00:04:23.542 "read": true,
00:04:23.542 "write": true,
00:04:23.542 "unmap": true,
00:04:23.542 "flush": true,
00:04:23.542 "reset": true,
00:04:23.542 "nvme_admin": false,
00:04:23.542 "nvme_io": false,
00:04:23.542 "nvme_io_md": false,
00:04:23.542 "write_zeroes": true,
00:04:23.542 "zcopy": true,
00:04:23.542 "get_zone_info": false,
00:04:23.542 "zone_management": false,
00:04:23.542 "zone_append": false,
00:04:23.542 "compare": false,
00:04:23.542 "compare_and_write": false,
00:04:23.542 "abort": true,
00:04:23.542 "seek_hole": false,
00:04:23.542 "seek_data": false,
00:04:23.542 "copy": true,
00:04:23.542 "nvme_iov_md": false
00:04:23.542 },
00:04:23.542 "memory_domains": [
00:04:23.542 {
00:04:23.542 "dma_device_id": "system",
00:04:23.542 "dma_device_type": 1
00:04:23.542 },
00:04:23.542 {
00:04:23.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:23.542 "dma_device_type": 2
00:04:23.542 }
00:04:23.542 ],
00:04:23.542 "driver_specific": {}
00:04:23.542 },
00:04:23.542 {
00:04:23.542 "name": "Passthru0", 00:04:23.542 "aliases": [ 00:04:23.542 "ea972b0a-5058-5283-bdf5-7e9e64f03d9b" 00:04:23.542 ], 00:04:23.542 "product_name": "passthru", 00:04:23.542 "block_size": 512, 00:04:23.542 "num_blocks": 16384, 00:04:23.542 "uuid": "ea972b0a-5058-5283-bdf5-7e9e64f03d9b", 00:04:23.542 "assigned_rate_limits": { 00:04:23.542 "rw_ios_per_sec": 0, 00:04:23.542 "rw_mbytes_per_sec": 0, 00:04:23.542 "r_mbytes_per_sec": 0, 00:04:23.542 "w_mbytes_per_sec": 0 00:04:23.542 }, 00:04:23.542 "claimed": false, 00:04:23.542 "zoned": false, 00:04:23.542 "supported_io_types": { 00:04:23.542 "read": true, 00:04:23.542 "write": true, 00:04:23.542 "unmap": true, 00:04:23.542 "flush": true, 00:04:23.542 "reset": true, 00:04:23.542 "nvme_admin": false, 00:04:23.542 "nvme_io": false, 00:04:23.542 "nvme_io_md": false, 00:04:23.542 "write_zeroes": true, 00:04:23.542 "zcopy": true, 00:04:23.542 "get_zone_info": false, 00:04:23.542 "zone_management": false, 00:04:23.542 "zone_append": false, 00:04:23.542 "compare": false, 00:04:23.542 "compare_and_write": false, 00:04:23.542 "abort": true, 00:04:23.542 "seek_hole": false, 00:04:23.542 "seek_data": false, 00:04:23.542 "copy": true, 00:04:23.542 "nvme_iov_md": false 00:04:23.542 }, 00:04:23.542 "memory_domains": [ 00:04:23.542 { 00:04:23.542 "dma_device_id": "system", 00:04:23.542 "dma_device_type": 1 00:04:23.542 }, 00:04:23.542 { 00:04:23.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.542 "dma_device_type": 2 00:04:23.542 } 00:04:23.542 ], 00:04:23.542 "driver_specific": { 00:04:23.542 "passthru": { 00:04:23.542 "name": "Passthru0", 00:04:23.542 "base_bdev_name": "Malloc0" 00:04:23.542 } 00:04:23.542 } 00:04:23.542 } 00:04:23.542 ]' 00:04:23.542 04:59:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:23.542 04:59:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:23.542 04:59:05 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:23.542 04:59:05 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.542 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.542 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.542 04:59:05 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:23.542 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.542 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.542 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.542 04:59:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:23.542 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.542 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.542 04:59:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.542 04:59:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:23.542 04:59:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:23.542 04:59:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:23.542 00:04:23.542 real 0m0.296s 00:04:23.542 user 0m0.185s 00:04:23.542 sys 0m0.051s 00:04:23.542 04:59:06 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.542 04:59:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.542 ************************************ 00:04:23.542 END TEST rpc_integrity 00:04:23.542 ************************************ 00:04:23.801 04:59:06 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:23.801 04:59:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.801 04:59:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.801 04:59:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.801 ************************************ 00:04:23.801 START TEST rpc_plugins 
00:04:23.801 ************************************ 00:04:23.801 04:59:06 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:23.801 04:59:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:23.801 04:59:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.801 04:59:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.801 04:59:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.801 04:59:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:23.801 04:59:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:23.801 04:59:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.801 04:59:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.801 04:59:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.801 04:59:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:23.801 { 00:04:23.801 "name": "Malloc1", 00:04:23.801 "aliases": [ 00:04:23.801 "69ae4632-7a42-49ea-b94a-f6da38e36497" 00:04:23.801 ], 00:04:23.801 "product_name": "Malloc disk", 00:04:23.801 "block_size": 4096, 00:04:23.801 "num_blocks": 256, 00:04:23.801 "uuid": "69ae4632-7a42-49ea-b94a-f6da38e36497", 00:04:23.801 "assigned_rate_limits": { 00:04:23.801 "rw_ios_per_sec": 0, 00:04:23.801 "rw_mbytes_per_sec": 0, 00:04:23.801 "r_mbytes_per_sec": 0, 00:04:23.801 "w_mbytes_per_sec": 0 00:04:23.801 }, 00:04:23.801 "claimed": false, 00:04:23.801 "zoned": false, 00:04:23.801 "supported_io_types": { 00:04:23.801 "read": true, 00:04:23.801 "write": true, 00:04:23.801 "unmap": true, 00:04:23.801 "flush": true, 00:04:23.801 "reset": true, 00:04:23.801 "nvme_admin": false, 00:04:23.801 "nvme_io": false, 00:04:23.801 "nvme_io_md": false, 00:04:23.801 "write_zeroes": true, 00:04:23.801 "zcopy": true, 00:04:23.801 "get_zone_info": false, 00:04:23.801 "zone_management": false, 00:04:23.801 
"zone_append": false, 00:04:23.801 "compare": false, 00:04:23.801 "compare_and_write": false, 00:04:23.801 "abort": true, 00:04:23.801 "seek_hole": false, 00:04:23.801 "seek_data": false, 00:04:23.801 "copy": true, 00:04:23.801 "nvme_iov_md": false 00:04:23.801 }, 00:04:23.801 "memory_domains": [ 00:04:23.801 { 00:04:23.801 "dma_device_id": "system", 00:04:23.801 "dma_device_type": 1 00:04:23.801 }, 00:04:23.801 { 00:04:23.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.801 "dma_device_type": 2 00:04:23.801 } 00:04:23.801 ], 00:04:23.801 "driver_specific": {} 00:04:23.801 } 00:04:23.801 ]' 00:04:23.801 04:59:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:23.801 04:59:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:23.801 04:59:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:23.801 04:59:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.801 04:59:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.802 04:59:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.802 04:59:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:23.802 04:59:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.802 04:59:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.802 04:59:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.802 04:59:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:23.802 04:59:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:23.802 04:59:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:23.802 00:04:23.802 real 0m0.144s 00:04:23.802 user 0m0.081s 00:04:23.802 sys 0m0.027s 00:04:23.802 04:59:06 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.802 04:59:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.802 ************************************ 
00:04:23.802 END TEST rpc_plugins 00:04:23.802 ************************************ 00:04:23.802 04:59:06 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:23.802 04:59:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.802 04:59:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.802 04:59:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.060 ************************************ 00:04:24.060 START TEST rpc_trace_cmd_test 00:04:24.060 ************************************ 00:04:24.060 04:59:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:24.060 04:59:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:24.060 04:59:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:24.060 04:59:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.060 04:59:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:24.060 04:59:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.060 04:59:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:24.060 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid273916", 00:04:24.060 "tpoint_group_mask": "0x8", 00:04:24.060 "iscsi_conn": { 00:04:24.060 "mask": "0x2", 00:04:24.060 "tpoint_mask": "0x0" 00:04:24.060 }, 00:04:24.060 "scsi": { 00:04:24.060 "mask": "0x4", 00:04:24.060 "tpoint_mask": "0x0" 00:04:24.060 }, 00:04:24.060 "bdev": { 00:04:24.060 "mask": "0x8", 00:04:24.060 "tpoint_mask": "0xffffffffffffffff" 00:04:24.060 }, 00:04:24.060 "nvmf_rdma": { 00:04:24.060 "mask": "0x10", 00:04:24.060 "tpoint_mask": "0x0" 00:04:24.060 }, 00:04:24.060 "nvmf_tcp": { 00:04:24.060 "mask": "0x20", 00:04:24.060 "tpoint_mask": "0x0" 00:04:24.060 }, 00:04:24.060 "ftl": { 00:04:24.060 "mask": "0x40", 00:04:24.060 "tpoint_mask": "0x0" 00:04:24.060 }, 00:04:24.060 "blobfs": { 00:04:24.060 "mask": "0x80", 00:04:24.060 
"tpoint_mask": "0x0" 00:04:24.060 }, 00:04:24.060 "dsa": { 00:04:24.060 "mask": "0x200", 00:04:24.060 "tpoint_mask": "0x0" 00:04:24.060 }, 00:04:24.060 "thread": { 00:04:24.060 "mask": "0x400", 00:04:24.060 "tpoint_mask": "0x0" 00:04:24.060 }, 00:04:24.060 "nvme_pcie": { 00:04:24.060 "mask": "0x800", 00:04:24.060 "tpoint_mask": "0x0" 00:04:24.060 }, 00:04:24.060 "iaa": { 00:04:24.060 "mask": "0x1000", 00:04:24.060 "tpoint_mask": "0x0" 00:04:24.060 }, 00:04:24.060 "nvme_tcp": { 00:04:24.060 "mask": "0x2000", 00:04:24.060 "tpoint_mask": "0x0" 00:04:24.060 }, 00:04:24.060 "bdev_nvme": { 00:04:24.060 "mask": "0x4000", 00:04:24.060 "tpoint_mask": "0x0" 00:04:24.060 }, 00:04:24.060 "sock": { 00:04:24.060 "mask": "0x8000", 00:04:24.060 "tpoint_mask": "0x0" 00:04:24.060 }, 00:04:24.060 "blob": { 00:04:24.060 "mask": "0x10000", 00:04:24.060 "tpoint_mask": "0x0" 00:04:24.060 }, 00:04:24.060 "bdev_raid": { 00:04:24.060 "mask": "0x20000", 00:04:24.060 "tpoint_mask": "0x0" 00:04:24.060 }, 00:04:24.060 "scheduler": { 00:04:24.060 "mask": "0x40000", 00:04:24.060 "tpoint_mask": "0x0" 00:04:24.060 } 00:04:24.060 }' 00:04:24.060 04:59:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:24.060 04:59:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:24.060 04:59:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:24.060 04:59:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:24.060 04:59:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:24.060 04:59:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:24.060 04:59:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:24.060 04:59:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:24.060 04:59:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:24.060 04:59:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:24.060 00:04:24.060 real 0m0.229s 00:04:24.060 user 0m0.186s 00:04:24.060 sys 0m0.036s 00:04:24.060 04:59:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.060 04:59:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:24.060 ************************************ 00:04:24.060 END TEST rpc_trace_cmd_test 00:04:24.060 ************************************ 00:04:24.319 04:59:06 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:24.319 04:59:06 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:24.319 04:59:06 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:24.319 04:59:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.319 04:59:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.319 04:59:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.319 ************************************ 00:04:24.319 START TEST rpc_daemon_integrity 00:04:24.319 ************************************ 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:24.319 { 00:04:24.319 "name": "Malloc2", 00:04:24.319 "aliases": [ 00:04:24.319 "c771b6ad-2b8d-4f8c-8c0a-f3ee6f440c4d" 00:04:24.319 ], 00:04:24.319 "product_name": "Malloc disk", 00:04:24.319 "block_size": 512, 00:04:24.319 "num_blocks": 16384, 00:04:24.319 "uuid": "c771b6ad-2b8d-4f8c-8c0a-f3ee6f440c4d", 00:04:24.319 "assigned_rate_limits": { 00:04:24.319 "rw_ios_per_sec": 0, 00:04:24.319 "rw_mbytes_per_sec": 0, 00:04:24.319 "r_mbytes_per_sec": 0, 00:04:24.319 "w_mbytes_per_sec": 0 00:04:24.319 }, 00:04:24.319 "claimed": false, 00:04:24.319 "zoned": false, 00:04:24.319 "supported_io_types": { 00:04:24.319 "read": true, 00:04:24.319 "write": true, 00:04:24.319 "unmap": true, 00:04:24.319 "flush": true, 00:04:24.319 "reset": true, 00:04:24.319 "nvme_admin": false, 00:04:24.319 "nvme_io": false, 00:04:24.319 "nvme_io_md": false, 00:04:24.319 "write_zeroes": true, 00:04:24.319 "zcopy": true, 00:04:24.319 "get_zone_info": false, 00:04:24.319 "zone_management": false, 00:04:24.319 "zone_append": false, 00:04:24.319 "compare": false, 00:04:24.319 "compare_and_write": false, 00:04:24.319 "abort": true, 00:04:24.319 "seek_hole": false, 00:04:24.319 "seek_data": false, 00:04:24.319 "copy": true, 00:04:24.319 "nvme_iov_md": false 00:04:24.319 }, 00:04:24.319 "memory_domains": [ 00:04:24.319 { 
00:04:24.319 "dma_device_id": "system", 00:04:24.319 "dma_device_type": 1 00:04:24.319 }, 00:04:24.319 { 00:04:24.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.319 "dma_device_type": 2 00:04:24.319 } 00:04:24.319 ], 00:04:24.319 "driver_specific": {} 00:04:24.319 } 00:04:24.319 ]' 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.319 [2024-12-09 04:59:06.740901] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:24.319 [2024-12-09 04:59:06.740928] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:24.319 [2024-12-09 04:59:06.740943] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6084c0 00:04:24.319 [2024-12-09 04:59:06.740952] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:24.319 [2024-12-09 04:59:06.741887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:24.319 [2024-12-09 04:59:06.741908] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:24.319 Passthru0 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:24.319 04:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:24.319 { 00:04:24.319 "name": "Malloc2", 00:04:24.319 "aliases": [ 00:04:24.319 "c771b6ad-2b8d-4f8c-8c0a-f3ee6f440c4d" 00:04:24.319 ], 00:04:24.319 "product_name": "Malloc disk", 00:04:24.319 "block_size": 512, 00:04:24.319 "num_blocks": 16384, 00:04:24.319 "uuid": "c771b6ad-2b8d-4f8c-8c0a-f3ee6f440c4d", 00:04:24.319 "assigned_rate_limits": { 00:04:24.319 "rw_ios_per_sec": 0, 00:04:24.319 "rw_mbytes_per_sec": 0, 00:04:24.319 "r_mbytes_per_sec": 0, 00:04:24.319 "w_mbytes_per_sec": 0 00:04:24.319 }, 00:04:24.319 "claimed": true, 00:04:24.319 "claim_type": "exclusive_write", 00:04:24.319 "zoned": false, 00:04:24.319 "supported_io_types": { 00:04:24.319 "read": true, 00:04:24.319 "write": true, 00:04:24.319 "unmap": true, 00:04:24.319 "flush": true, 00:04:24.319 "reset": true, 00:04:24.319 "nvme_admin": false, 00:04:24.319 "nvme_io": false, 00:04:24.319 "nvme_io_md": false, 00:04:24.319 "write_zeroes": true, 00:04:24.319 "zcopy": true, 00:04:24.319 "get_zone_info": false, 00:04:24.319 "zone_management": false, 00:04:24.319 "zone_append": false, 00:04:24.319 "compare": false, 00:04:24.319 "compare_and_write": false, 00:04:24.319 "abort": true, 00:04:24.319 "seek_hole": false, 00:04:24.319 "seek_data": false, 00:04:24.319 "copy": true, 00:04:24.319 "nvme_iov_md": false 00:04:24.319 }, 00:04:24.319 "memory_domains": [ 00:04:24.319 { 00:04:24.319 "dma_device_id": "system", 00:04:24.319 "dma_device_type": 1 00:04:24.319 }, 00:04:24.319 { 00:04:24.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.319 "dma_device_type": 2 00:04:24.319 } 00:04:24.319 ], 00:04:24.319 "driver_specific": {} 00:04:24.319 }, 00:04:24.319 { 00:04:24.319 "name": "Passthru0", 00:04:24.319 "aliases": [ 00:04:24.319 "d818654f-6ece-5b9e-99ba-9c93a4fc24e3" 00:04:24.319 ], 00:04:24.319 "product_name": "passthru", 00:04:24.319 "block_size": 512, 00:04:24.319 "num_blocks": 16384, 00:04:24.319 "uuid": 
"d818654f-6ece-5b9e-99ba-9c93a4fc24e3", 00:04:24.319 "assigned_rate_limits": { 00:04:24.319 "rw_ios_per_sec": 0, 00:04:24.319 "rw_mbytes_per_sec": 0, 00:04:24.319 "r_mbytes_per_sec": 0, 00:04:24.319 "w_mbytes_per_sec": 0 00:04:24.319 }, 00:04:24.319 "claimed": false, 00:04:24.319 "zoned": false, 00:04:24.319 "supported_io_types": { 00:04:24.319 "read": true, 00:04:24.319 "write": true, 00:04:24.319 "unmap": true, 00:04:24.319 "flush": true, 00:04:24.319 "reset": true, 00:04:24.319 "nvme_admin": false, 00:04:24.319 "nvme_io": false, 00:04:24.319 "nvme_io_md": false, 00:04:24.319 "write_zeroes": true, 00:04:24.319 "zcopy": true, 00:04:24.319 "get_zone_info": false, 00:04:24.319 "zone_management": false, 00:04:24.319 "zone_append": false, 00:04:24.319 "compare": false, 00:04:24.319 "compare_and_write": false, 00:04:24.319 "abort": true, 00:04:24.319 "seek_hole": false, 00:04:24.319 "seek_data": false, 00:04:24.319 "copy": true, 00:04:24.319 "nvme_iov_md": false 00:04:24.319 }, 00:04:24.319 "memory_domains": [ 00:04:24.319 { 00:04:24.319 "dma_device_id": "system", 00:04:24.319 "dma_device_type": 1 00:04:24.319 }, 00:04:24.320 { 00:04:24.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.320 "dma_device_type": 2 00:04:24.320 } 00:04:24.320 ], 00:04:24.320 "driver_specific": { 00:04:24.320 "passthru": { 00:04:24.320 "name": "Passthru0", 00:04:24.320 "base_bdev_name": "Malloc2" 00:04:24.320 } 00:04:24.320 } 00:04:24.320 } 00:04:24.320 ]' 00:04:24.320 04:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:24.579 04:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:24.579 04:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:24.579 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.579 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.579 04:59:06 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.579 04:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:24.579 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.579 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.579 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.579 04:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:24.579 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.579 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.579 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.579 04:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:24.579 04:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:24.579 04:59:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:24.579 00:04:24.579 real 0m0.285s 00:04:24.579 user 0m0.175s 00:04:24.579 sys 0m0.053s 00:04:24.579 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.579 04:59:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.579 ************************************ 00:04:24.579 END TEST rpc_daemon_integrity 00:04:24.579 ************************************ 00:04:24.580 04:59:06 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:24.580 04:59:06 rpc -- rpc/rpc.sh@84 -- # killprocess 273916 00:04:24.580 04:59:06 rpc -- common/autotest_common.sh@954 -- # '[' -z 273916 ']' 00:04:24.580 04:59:06 rpc -- common/autotest_common.sh@958 -- # kill -0 273916 00:04:24.580 04:59:06 rpc -- common/autotest_common.sh@959 -- # uname 00:04:24.580 04:59:06 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.580 04:59:06 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 273916 00:04:24.580 04:59:06 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.580 04:59:06 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:24.580 04:59:06 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 273916' 00:04:24.580 killing process with pid 273916 00:04:24.580 04:59:06 rpc -- common/autotest_common.sh@973 -- # kill 273916 00:04:24.580 04:59:06 rpc -- common/autotest_common.sh@978 -- # wait 273916 00:04:25.149 00:04:25.149 real 0m2.743s 00:04:25.149 user 0m3.424s 00:04:25.149 sys 0m0.883s 00:04:25.149 04:59:07 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.149 04:59:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.149 ************************************ 00:04:25.149 END TEST rpc 00:04:25.149 ************************************ 00:04:25.149 04:59:07 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:25.149 04:59:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.149 04:59:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.149 04:59:07 -- common/autotest_common.sh@10 -- # set +x 00:04:25.149 ************************************ 00:04:25.149 START TEST skip_rpc 00:04:25.149 ************************************ 00:04:25.149 04:59:07 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:25.149 * Looking for test storage... 
00:04:25.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:25.149 04:59:07 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:25.149 04:59:07 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:25.149 04:59:07 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:25.149 04:59:07 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.149 04:59:07 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:25.149 04:59:07 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.149 04:59:07 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:25.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.149 --rc genhtml_branch_coverage=1 00:04:25.149 --rc genhtml_function_coverage=1 00:04:25.149 --rc genhtml_legend=1 00:04:25.149 --rc geninfo_all_blocks=1 00:04:25.149 --rc geninfo_unexecuted_blocks=1 00:04:25.149 00:04:25.149 ' 00:04:25.149 04:59:07 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:25.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.149 --rc genhtml_branch_coverage=1 00:04:25.149 --rc genhtml_function_coverage=1 00:04:25.149 --rc genhtml_legend=1 00:04:25.149 --rc geninfo_all_blocks=1 00:04:25.149 --rc geninfo_unexecuted_blocks=1 00:04:25.149 00:04:25.149 ' 00:04:25.149 04:59:07 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:25.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.149 --rc genhtml_branch_coverage=1 00:04:25.149 --rc genhtml_function_coverage=1 00:04:25.149 --rc genhtml_legend=1 00:04:25.149 --rc geninfo_all_blocks=1 00:04:25.149 --rc geninfo_unexecuted_blocks=1 00:04:25.149 00:04:25.149 ' 00:04:25.149 04:59:07 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:25.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.149 --rc genhtml_branch_coverage=1 00:04:25.149 --rc genhtml_function_coverage=1 00:04:25.149 --rc genhtml_legend=1 00:04:25.149 --rc geninfo_all_blocks=1 00:04:25.149 --rc geninfo_unexecuted_blocks=1 00:04:25.149 00:04:25.149 ' 00:04:25.149 04:59:07 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:25.149 04:59:07 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:25.149 04:59:07 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:25.149 04:59:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.149 04:59:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.149 04:59:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.410 ************************************ 00:04:25.410 START TEST skip_rpc 00:04:25.410 ************************************ 00:04:25.410 04:59:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:25.410 04:59:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=274525 00:04:25.410 04:59:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.410 04:59:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:25.410 04:59:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
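The xtrace above shows scripts/common.sh deciding whether the installed lcov is older than 2 (`lt 1.15 2` → `cmp_versions 1.15 '<' 2`): each version string is split on `.`, `-`, and `:` into an array and the fields are compared numerically. A minimal sketch of that technique (function name is illustrative, not SPDK's exact helper):

```shell
#!/usr/bin/env bash
# Sketch of the field-by-field version comparison traced above:
# split each version on . - : and compare numerically, left to right.
ver_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    # iterate over the longer of the two field lists, padding with 0
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal is not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This is why `lt 1.15 2` returns 0 in the trace: the first field comparison (1 < 2) already decides the result, and the lcov-specific `--rc` options get enabled.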
00:04:25.410 [2024-12-09 04:59:07.697077] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:04:25.410 [2024-12-09 04:59:07.697119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid274525 ] 00:04:25.410 [2024-12-09 04:59:07.788734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.410 [2024-12-09 04:59:07.826906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:30.693 04:59:12 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 274525 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 274525 ']' 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 274525 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 274525 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 274525' 00:04:30.693 killing process with pid 274525 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 274525 00:04:30.693 04:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 274525 00:04:30.693 00:04:30.693 real 0m5.424s 00:04:30.693 user 0m5.160s 00:04:30.693 sys 0m0.316s 00:04:30.693 04:59:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.693 04:59:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.693 ************************************ 00:04:30.693 END TEST skip_rpc 00:04:30.693 ************************************ 00:04:30.693 04:59:13 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:30.693 04:59:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.693 04:59:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.693 04:59:13 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:04:30.693 ************************************ 00:04:30.693 START TEST skip_rpc_with_json 00:04:30.693 ************************************ 00:04:30.693 04:59:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:30.693 04:59:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:30.693 04:59:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=275528 00:04:30.693 04:59:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.693 04:59:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:30.693 04:59:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 275528 00:04:30.693 04:59:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 275528 ']' 00:04:30.693 04:59:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.693 04:59:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.693 04:59:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.693 04:59:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.693 04:59:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.953 [2024-12-09 04:59:13.208113] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:04:30.953 [2024-12-09 04:59:13.208159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275528 ] 00:04:30.953 [2024-12-09 04:59:13.298193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.953 [2024-12-09 04:59:13.339894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.892 04:59:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.892 04:59:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:31.892 04:59:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:31.892 04:59:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.892 04:59:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.892 [2024-12-09 04:59:14.034027] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:31.892 request: 00:04:31.892 { 00:04:31.892 "trtype": "tcp", 00:04:31.892 "method": "nvmf_get_transports", 00:04:31.892 "req_id": 1 00:04:31.892 } 00:04:31.892 Got JSON-RPC error response 00:04:31.892 response: 00:04:31.892 { 00:04:31.892 "code": -19, 00:04:31.892 "message": "No such device" 00:04:31.892 } 00:04:31.892 04:59:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:31.892 04:59:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:31.892 04:59:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.892 04:59:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.892 [2024-12-09 04:59:14.046132] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:31.892 04:59:14 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.892 04:59:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:31.892 04:59:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.892 04:59:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.892 04:59:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.892 04:59:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:31.892 { 00:04:31.892 "subsystems": [ 00:04:31.892 { 00:04:31.892 "subsystem": "fsdev", 00:04:31.892 "config": [ 00:04:31.892 { 00:04:31.892 "method": "fsdev_set_opts", 00:04:31.892 "params": { 00:04:31.892 "fsdev_io_pool_size": 65535, 00:04:31.892 "fsdev_io_cache_size": 256 00:04:31.892 } 00:04:31.892 } 00:04:31.892 ] 00:04:31.892 }, 00:04:31.892 { 00:04:31.892 "subsystem": "vfio_user_target", 00:04:31.892 "config": null 00:04:31.892 }, 00:04:31.892 { 00:04:31.892 "subsystem": "keyring", 00:04:31.892 "config": [] 00:04:31.892 }, 00:04:31.892 { 00:04:31.892 "subsystem": "iobuf", 00:04:31.892 "config": [ 00:04:31.892 { 00:04:31.892 "method": "iobuf_set_options", 00:04:31.892 "params": { 00:04:31.892 "small_pool_count": 8192, 00:04:31.892 "large_pool_count": 1024, 00:04:31.892 "small_bufsize": 8192, 00:04:31.892 "large_bufsize": 135168, 00:04:31.892 "enable_numa": false 00:04:31.892 } 00:04:31.892 } 00:04:31.892 ] 00:04:31.892 }, 00:04:31.892 { 00:04:31.892 "subsystem": "sock", 00:04:31.892 "config": [ 00:04:31.892 { 00:04:31.892 "method": "sock_set_default_impl", 00:04:31.892 "params": { 00:04:31.892 "impl_name": "posix" 00:04:31.892 } 00:04:31.892 }, 00:04:31.892 { 00:04:31.892 "method": "sock_impl_set_options", 00:04:31.892 "params": { 00:04:31.892 "impl_name": "ssl", 00:04:31.892 "recv_buf_size": 4096, 00:04:31.892 "send_buf_size": 4096, 
00:04:31.892 "enable_recv_pipe": true, 00:04:31.892 "enable_quickack": false, 00:04:31.892 "enable_placement_id": 0, 00:04:31.892 "enable_zerocopy_send_server": true, 00:04:31.892 "enable_zerocopy_send_client": false, 00:04:31.892 "zerocopy_threshold": 0, 00:04:31.892 "tls_version": 0, 00:04:31.892 "enable_ktls": false 00:04:31.892 } 00:04:31.892 }, 00:04:31.892 { 00:04:31.892 "method": "sock_impl_set_options", 00:04:31.892 "params": { 00:04:31.892 "impl_name": "posix", 00:04:31.892 "recv_buf_size": 2097152, 00:04:31.892 "send_buf_size": 2097152, 00:04:31.892 "enable_recv_pipe": true, 00:04:31.892 "enable_quickack": false, 00:04:31.892 "enable_placement_id": 0, 00:04:31.892 "enable_zerocopy_send_server": true, 00:04:31.892 "enable_zerocopy_send_client": false, 00:04:31.892 "zerocopy_threshold": 0, 00:04:31.892 "tls_version": 0, 00:04:31.892 "enable_ktls": false 00:04:31.892 } 00:04:31.892 } 00:04:31.892 ] 00:04:31.892 }, 00:04:31.892 { 00:04:31.892 "subsystem": "vmd", 00:04:31.892 "config": [] 00:04:31.892 }, 00:04:31.892 { 00:04:31.892 "subsystem": "accel", 00:04:31.892 "config": [ 00:04:31.892 { 00:04:31.892 "method": "accel_set_options", 00:04:31.892 "params": { 00:04:31.892 "small_cache_size": 128, 00:04:31.892 "large_cache_size": 16, 00:04:31.892 "task_count": 2048, 00:04:31.892 "sequence_count": 2048, 00:04:31.892 "buf_count": 2048 00:04:31.892 } 00:04:31.892 } 00:04:31.892 ] 00:04:31.892 }, 00:04:31.892 { 00:04:31.892 "subsystem": "bdev", 00:04:31.892 "config": [ 00:04:31.892 { 00:04:31.892 "method": "bdev_set_options", 00:04:31.892 "params": { 00:04:31.892 "bdev_io_pool_size": 65535, 00:04:31.892 "bdev_io_cache_size": 256, 00:04:31.892 "bdev_auto_examine": true, 00:04:31.892 "iobuf_small_cache_size": 128, 00:04:31.892 "iobuf_large_cache_size": 16 00:04:31.892 } 00:04:31.892 }, 00:04:31.892 { 00:04:31.892 "method": "bdev_raid_set_options", 00:04:31.892 "params": { 00:04:31.892 "process_window_size_kb": 1024, 00:04:31.892 "process_max_bandwidth_mb_sec": 0 
00:04:31.892 } 00:04:31.892 }, 00:04:31.892 { 00:04:31.892 "method": "bdev_iscsi_set_options", 00:04:31.892 "params": { 00:04:31.892 "timeout_sec": 30 00:04:31.892 } 00:04:31.892 }, 00:04:31.892 { 00:04:31.892 "method": "bdev_nvme_set_options", 00:04:31.893 "params": { 00:04:31.893 "action_on_timeout": "none", 00:04:31.893 "timeout_us": 0, 00:04:31.893 "timeout_admin_us": 0, 00:04:31.893 "keep_alive_timeout_ms": 10000, 00:04:31.893 "arbitration_burst": 0, 00:04:31.893 "low_priority_weight": 0, 00:04:31.893 "medium_priority_weight": 0, 00:04:31.893 "high_priority_weight": 0, 00:04:31.893 "nvme_adminq_poll_period_us": 10000, 00:04:31.893 "nvme_ioq_poll_period_us": 0, 00:04:31.893 "io_queue_requests": 0, 00:04:31.893 "delay_cmd_submit": true, 00:04:31.893 "transport_retry_count": 4, 00:04:31.893 "bdev_retry_count": 3, 00:04:31.893 "transport_ack_timeout": 0, 00:04:31.893 "ctrlr_loss_timeout_sec": 0, 00:04:31.893 "reconnect_delay_sec": 0, 00:04:31.893 "fast_io_fail_timeout_sec": 0, 00:04:31.893 "disable_auto_failback": false, 00:04:31.893 "generate_uuids": false, 00:04:31.893 "transport_tos": 0, 00:04:31.893 "nvme_error_stat": false, 00:04:31.893 "rdma_srq_size": 0, 00:04:31.893 "io_path_stat": false, 00:04:31.893 "allow_accel_sequence": false, 00:04:31.893 "rdma_max_cq_size": 0, 00:04:31.893 "rdma_cm_event_timeout_ms": 0, 00:04:31.893 "dhchap_digests": [ 00:04:31.893 "sha256", 00:04:31.893 "sha384", 00:04:31.893 "sha512" 00:04:31.893 ], 00:04:31.893 "dhchap_dhgroups": [ 00:04:31.893 "null", 00:04:31.893 "ffdhe2048", 00:04:31.893 "ffdhe3072", 00:04:31.893 "ffdhe4096", 00:04:31.893 "ffdhe6144", 00:04:31.893 "ffdhe8192" 00:04:31.893 ] 00:04:31.893 } 00:04:31.893 }, 00:04:31.893 { 00:04:31.893 "method": "bdev_nvme_set_hotplug", 00:04:31.893 "params": { 00:04:31.893 "period_us": 100000, 00:04:31.893 "enable": false 00:04:31.893 } 00:04:31.893 }, 00:04:31.893 { 00:04:31.893 "method": "bdev_wait_for_examine" 00:04:31.893 } 00:04:31.893 ] 00:04:31.893 }, 00:04:31.893 { 
00:04:31.893 "subsystem": "scsi", 00:04:31.893 "config": null 00:04:31.893 }, 00:04:31.893 { 00:04:31.893 "subsystem": "scheduler", 00:04:31.893 "config": [ 00:04:31.893 { 00:04:31.893 "method": "framework_set_scheduler", 00:04:31.893 "params": { 00:04:31.893 "name": "static" 00:04:31.893 } 00:04:31.893 } 00:04:31.893 ] 00:04:31.893 }, 00:04:31.893 { 00:04:31.893 "subsystem": "vhost_scsi", 00:04:31.893 "config": [] 00:04:31.893 }, 00:04:31.893 { 00:04:31.893 "subsystem": "vhost_blk", 00:04:31.893 "config": [] 00:04:31.893 }, 00:04:31.893 { 00:04:31.893 "subsystem": "ublk", 00:04:31.893 "config": [] 00:04:31.893 }, 00:04:31.893 { 00:04:31.893 "subsystem": "nbd", 00:04:31.893 "config": [] 00:04:31.893 }, 00:04:31.893 { 00:04:31.893 "subsystem": "nvmf", 00:04:31.893 "config": [ 00:04:31.893 { 00:04:31.893 "method": "nvmf_set_config", 00:04:31.893 "params": { 00:04:31.893 "discovery_filter": "match_any", 00:04:31.893 "admin_cmd_passthru": { 00:04:31.893 "identify_ctrlr": false 00:04:31.893 }, 00:04:31.893 "dhchap_digests": [ 00:04:31.893 "sha256", 00:04:31.893 "sha384", 00:04:31.893 "sha512" 00:04:31.893 ], 00:04:31.893 "dhchap_dhgroups": [ 00:04:31.893 "null", 00:04:31.893 "ffdhe2048", 00:04:31.893 "ffdhe3072", 00:04:31.893 "ffdhe4096", 00:04:31.893 "ffdhe6144", 00:04:31.893 "ffdhe8192" 00:04:31.893 ] 00:04:31.893 } 00:04:31.893 }, 00:04:31.893 { 00:04:31.893 "method": "nvmf_set_max_subsystems", 00:04:31.893 "params": { 00:04:31.893 "max_subsystems": 1024 00:04:31.893 } 00:04:31.893 }, 00:04:31.893 { 00:04:31.893 "method": "nvmf_set_crdt", 00:04:31.893 "params": { 00:04:31.893 "crdt1": 0, 00:04:31.893 "crdt2": 0, 00:04:31.893 "crdt3": 0 00:04:31.893 } 00:04:31.893 }, 00:04:31.893 { 00:04:31.893 "method": "nvmf_create_transport", 00:04:31.893 "params": { 00:04:31.893 "trtype": "TCP", 00:04:31.893 "max_queue_depth": 128, 00:04:31.893 "max_io_qpairs_per_ctrlr": 127, 00:04:31.893 "in_capsule_data_size": 4096, 00:04:31.893 "max_io_size": 131072, 00:04:31.893 
"io_unit_size": 131072, 00:04:31.893 "max_aq_depth": 128, 00:04:31.893 "num_shared_buffers": 511, 00:04:31.893 "buf_cache_size": 4294967295, 00:04:31.893 "dif_insert_or_strip": false, 00:04:31.893 "zcopy": false, 00:04:31.893 "c2h_success": true, 00:04:31.893 "sock_priority": 0, 00:04:31.893 "abort_timeout_sec": 1, 00:04:31.893 "ack_timeout": 0, 00:04:31.893 "data_wr_pool_size": 0 00:04:31.893 } 00:04:31.893 } 00:04:31.893 ] 00:04:31.893 }, 00:04:31.893 { 00:04:31.893 "subsystem": "iscsi", 00:04:31.893 "config": [ 00:04:31.893 { 00:04:31.893 "method": "iscsi_set_options", 00:04:31.893 "params": { 00:04:31.893 "node_base": "iqn.2016-06.io.spdk", 00:04:31.893 "max_sessions": 128, 00:04:31.893 "max_connections_per_session": 2, 00:04:31.893 "max_queue_depth": 64, 00:04:31.893 "default_time2wait": 2, 00:04:31.893 "default_time2retain": 20, 00:04:31.893 "first_burst_length": 8192, 00:04:31.893 "immediate_data": true, 00:04:31.893 "allow_duplicated_isid": false, 00:04:31.893 "error_recovery_level": 0, 00:04:31.893 "nop_timeout": 60, 00:04:31.893 "nop_in_interval": 30, 00:04:31.893 "disable_chap": false, 00:04:31.893 "require_chap": false, 00:04:31.893 "mutual_chap": false, 00:04:31.893 "chap_group": 0, 00:04:31.893 "max_large_datain_per_connection": 64, 00:04:31.893 "max_r2t_per_connection": 4, 00:04:31.893 "pdu_pool_size": 36864, 00:04:31.893 "immediate_data_pool_size": 16384, 00:04:31.893 "data_out_pool_size": 2048 00:04:31.893 } 00:04:31.893 } 00:04:31.893 ] 00:04:31.893 } 00:04:31.893 ] 00:04:31.893 } 00:04:31.893 04:59:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:31.893 04:59:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 275528 00:04:31.893 04:59:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 275528 ']' 00:04:31.893 04:59:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 275528 00:04:31.893 04:59:14 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:04:31.893 04:59:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.893 04:59:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275528 00:04:31.893 04:59:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.893 04:59:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.893 04:59:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275528' 00:04:31.893 killing process with pid 275528 00:04:31.894 04:59:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 275528 00:04:31.894 04:59:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 275528 00:04:32.461 04:59:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=275800 00:04:32.461 04:59:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:32.461 04:59:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:37.736 04:59:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 275800 00:04:37.736 04:59:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 275800 ']' 00:04:37.736 04:59:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 275800 00:04:37.736 04:59:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:37.736 04:59:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.736 04:59:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275800 00:04:37.736 04:59:19 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.736 04:59:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.736 04:59:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275800' 00:04:37.736 killing process with pid 275800 00:04:37.736 04:59:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 275800 00:04:37.736 04:59:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 275800 00:04:37.736 04:59:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:37.736 04:59:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:37.736 00:04:37.736 real 0m6.898s 00:04:37.736 user 0m6.705s 00:04:37.736 sys 0m0.702s 00:04:37.736 04:59:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.736 04:59:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.736 ************************************ 00:04:37.736 END TEST skip_rpc_with_json 00:04:37.736 ************************************ 00:04:37.736 04:59:20 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:37.736 04:59:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.736 04:59:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.736 04:59:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.736 ************************************ 00:04:37.736 START TEST skip_rpc_with_delay 00:04:37.736 ************************************ 00:04:37.736 04:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:37.736 04:59:20 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.736 04:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:37.736 04:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.736 04:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.736 04:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.736 04:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.736 04:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.736 04:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.736 04:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.736 04:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.736 04:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:37.736 04:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.736 [2024-12-09 04:59:20.194049] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
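The `NOT`/`valid_exec_arg` trace above is the suite's expected-failure pattern: spdk_tgt is launched with `--wait-for-rpc` but no RPC server, the resulting error exit is captured, and the test passes only because the command failed. A minimal sketch of that inversion wrapper (the `not` name here is illustrative, not SPDK's exact `NOT` helper, which also validates the argument and classifies the exit status):

```shell
#!/usr/bin/env bash
# Sketch of the expected-failure wrapper driving the trace above:
# run a command that must fail and invert its exit status.
not() {
    local es=0
    "$@" || es=$?       # capture the wrapped command's exit status
    (( es != 0 ))       # succeed only if the command actually failed
}

not false && echo "failure detected as expected"
```

SPDK's real helper additionally inspects the status (the `(( es > 128 ))` signal check and `es=1` normalization visible in the trace) so a crash is distinguished from an ordinary error exit.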
00:04:37.996 04:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:37.996 04:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:37.996 04:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:37.996 04:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:37.996 00:04:37.996 real 0m0.077s 00:04:37.996 user 0m0.041s 00:04:37.996 sys 0m0.035s 00:04:37.996 04:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.996 04:59:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:37.996 ************************************ 00:04:37.996 END TEST skip_rpc_with_delay 00:04:37.996 ************************************ 00:04:37.996 04:59:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:37.996 04:59:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:37.996 04:59:20 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:37.996 04:59:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.996 04:59:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.996 04:59:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.996 ************************************ 00:04:37.996 START TEST exit_on_failed_rpc_init 00:04:37.996 ************************************ 00:04:37.996 04:59:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:37.996 04:59:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=276915 00:04:37.996 04:59:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 276915 00:04:37.996 04:59:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:04:37.996 04:59:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 276915 ']' 00:04:37.996 04:59:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.996 04:59:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.996 04:59:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.996 04:59:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.996 04:59:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:37.996 [2024-12-09 04:59:20.351494] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:04:37.996 [2024-12-09 04:59:20.351544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid276915 ] 00:04:37.996 [2024-12-09 04:59:20.445718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.256 [2024-12-09 04:59:20.485482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.825 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.825 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:38.825 04:59:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.825 04:59:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:38.825 
04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:38.825 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:38.825 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.825 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.825 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.825 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.825 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.825 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.825 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.825 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:38.825 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:38.825 [2024-12-09 04:59:21.229732] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:04:38.825 [2024-12-09 04:59:21.229784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid276933 ] 00:04:39.084 [2024-12-09 04:59:21.321002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.084 [2024-12-09 04:59:21.359711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.084 [2024-12-09 04:59:21.359768] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:39.084 [2024-12-09 04:59:21.359779] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:39.084 [2024-12-09 04:59:21.359787] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:39.084 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:39.084 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:39.084 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:39.084 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:39.084 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:39.084 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:39.084 04:59:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:39.084 04:59:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 276915 00:04:39.084 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 276915 ']' 00:04:39.084 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 276915 00:04:39.084 04:59:21 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:39.084 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.084 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 276915 00:04:39.084 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.084 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.084 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 276915' 00:04:39.084 killing process with pid 276915 00:04:39.084 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 276915 00:04:39.084 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 276915 00:04:39.652 00:04:39.652 real 0m1.552s 00:04:39.652 user 0m1.755s 00:04:39.652 sys 0m0.483s 00:04:39.652 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.652 04:59:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:39.652 ************************************ 00:04:39.652 END TEST exit_on_failed_rpc_init 00:04:39.652 ************************************ 00:04:39.652 04:59:21 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:39.652 00:04:39.652 real 0m14.487s 00:04:39.652 user 0m13.894s 00:04:39.652 sys 0m1.884s 00:04:39.652 04:59:21 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.652 04:59:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.652 ************************************ 00:04:39.652 END TEST skip_rpc 00:04:39.652 ************************************ 00:04:39.652 04:59:21 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:39.652 04:59:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.653 04:59:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.653 04:59:21 -- common/autotest_common.sh@10 -- # set +x 00:04:39.653 ************************************ 00:04:39.653 START TEST rpc_client 00:04:39.653 ************************************ 00:04:39.653 04:59:21 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:39.653 * Looking for test storage... 00:04:39.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:39.653 04:59:22 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.653 04:59:22 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.653 04:59:22 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.913 04:59:22 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.913 04:59:22 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:39.913 04:59:22 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.913 04:59:22 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.913 --rc genhtml_branch_coverage=1 00:04:39.913 --rc genhtml_function_coverage=1 00:04:39.913 --rc genhtml_legend=1 00:04:39.913 --rc geninfo_all_blocks=1 00:04:39.913 --rc geninfo_unexecuted_blocks=1 00:04:39.913 00:04:39.913 ' 00:04:39.913 04:59:22 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.913 --rc genhtml_branch_coverage=1 
00:04:39.913 --rc genhtml_function_coverage=1 00:04:39.913 --rc genhtml_legend=1 00:04:39.913 --rc geninfo_all_blocks=1 00:04:39.913 --rc geninfo_unexecuted_blocks=1 00:04:39.913 00:04:39.913 ' 00:04:39.913 04:59:22 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.913 --rc genhtml_branch_coverage=1 00:04:39.913 --rc genhtml_function_coverage=1 00:04:39.913 --rc genhtml_legend=1 00:04:39.913 --rc geninfo_all_blocks=1 00:04:39.913 --rc geninfo_unexecuted_blocks=1 00:04:39.913 00:04:39.913 ' 00:04:39.913 04:59:22 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.913 --rc genhtml_branch_coverage=1 00:04:39.913 --rc genhtml_function_coverage=1 00:04:39.913 --rc genhtml_legend=1 00:04:39.913 --rc geninfo_all_blocks=1 00:04:39.913 --rc geninfo_unexecuted_blocks=1 00:04:39.913 00:04:39.913 ' 00:04:39.913 04:59:22 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:39.913 OK 00:04:39.913 04:59:22 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:39.913 00:04:39.913 real 0m0.224s 00:04:39.913 user 0m0.121s 00:04:39.913 sys 0m0.120s 00:04:39.913 04:59:22 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.913 04:59:22 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:39.913 ************************************ 00:04:39.913 END TEST rpc_client 00:04:39.913 ************************************ 00:04:39.913 04:59:22 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:39.913 04:59:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.913 04:59:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.914 04:59:22 -- common/autotest_common.sh@10 
-- # set +x 00:04:39.914 ************************************ 00:04:39.914 START TEST json_config 00:04:39.914 ************************************ 00:04:39.914 04:59:22 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:39.914 04:59:22 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.914 04:59:22 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.914 04:59:22 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:40.174 04:59:22 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:40.174 04:59:22 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.174 04:59:22 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.174 04:59:22 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.174 04:59:22 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.174 04:59:22 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.174 04:59:22 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.174 04:59:22 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.174 04:59:22 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.174 04:59:22 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.174 04:59:22 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.174 04:59:22 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.174 04:59:22 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:40.174 04:59:22 json_config -- scripts/common.sh@345 -- # : 1 00:04:40.174 04:59:22 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.174 04:59:22 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.174 04:59:22 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:40.174 04:59:22 json_config -- scripts/common.sh@353 -- # local d=1 00:04:40.174 04:59:22 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.174 04:59:22 json_config -- scripts/common.sh@355 -- # echo 1 00:04:40.175 04:59:22 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.175 04:59:22 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:40.175 04:59:22 json_config -- scripts/common.sh@353 -- # local d=2 00:04:40.175 04:59:22 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.175 04:59:22 json_config -- scripts/common.sh@355 -- # echo 2 00:04:40.175 04:59:22 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.175 04:59:22 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.175 04:59:22 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.175 04:59:22 json_config -- scripts/common.sh@368 -- # return 0 00:04:40.175 04:59:22 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.175 04:59:22 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:40.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.175 --rc genhtml_branch_coverage=1 00:04:40.175 --rc genhtml_function_coverage=1 00:04:40.175 --rc genhtml_legend=1 00:04:40.175 --rc geninfo_all_blocks=1 00:04:40.175 --rc geninfo_unexecuted_blocks=1 00:04:40.175 00:04:40.175 ' 00:04:40.175 04:59:22 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:40.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.175 --rc genhtml_branch_coverage=1 00:04:40.175 --rc genhtml_function_coverage=1 00:04:40.175 --rc genhtml_legend=1 00:04:40.175 --rc geninfo_all_blocks=1 00:04:40.175 --rc geninfo_unexecuted_blocks=1 00:04:40.175 00:04:40.175 ' 00:04:40.175 04:59:22 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:40.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.175 --rc genhtml_branch_coverage=1 00:04:40.175 --rc genhtml_function_coverage=1 00:04:40.175 --rc genhtml_legend=1 00:04:40.175 --rc geninfo_all_blocks=1 00:04:40.175 --rc geninfo_unexecuted_blocks=1 00:04:40.175 00:04:40.175 ' 00:04:40.175 04:59:22 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:40.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.175 --rc genhtml_branch_coverage=1 00:04:40.175 --rc genhtml_function_coverage=1 00:04:40.175 --rc genhtml_legend=1 00:04:40.175 --rc geninfo_all_blocks=1 00:04:40.175 --rc geninfo_unexecuted_blocks=1 00:04:40.175 00:04:40.175 ' 00:04:40.175 04:59:22 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:40.175 04:59:22 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:40.175 04:59:22 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:40.175 04:59:22 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:40.175 04:59:22 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:40.175 04:59:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.175 04:59:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.175 04:59:22 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.175 04:59:22 json_config -- paths/export.sh@5 -- # export PATH 00:04:40.175 04:59:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@51 -- # : 0 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:40.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:40.175 04:59:22 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:40.175 04:59:22 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:40.175 04:59:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:40.175 04:59:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:40.175 04:59:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:40.175 04:59:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:40.175 04:59:22 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:40.175 04:59:22 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:40.175 04:59:22 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:40.175 04:59:22 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:40.175 04:59:22 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:40.175 04:59:22 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:40.175 04:59:22 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:40.175 04:59:22 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:40.175 04:59:22 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:40.175 04:59:22 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:40.175 04:59:22 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:40.175 INFO: JSON configuration test init 00:04:40.175 04:59:22 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:40.175 04:59:22 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:40.175 04:59:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:40.175 04:59:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.175 04:59:22 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:40.175 04:59:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:40.175 04:59:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.175 04:59:22 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:40.175 04:59:22 json_config -- json_config/common.sh@9 -- # local app=target 00:04:40.175 04:59:22 json_config -- json_config/common.sh@10 -- # shift 00:04:40.175 04:59:22 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:40.175 04:59:22 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:40.175 04:59:22 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:40.175 04:59:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:40.175 04:59:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:40.175 04:59:22 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=277325 00:04:40.175 04:59:22 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:40.175 Waiting for target to run... 
00:04:40.175 04:59:22 json_config -- json_config/common.sh@25 -- # waitforlisten 277325 /var/tmp/spdk_tgt.sock 00:04:40.175 04:59:22 json_config -- common/autotest_common.sh@835 -- # '[' -z 277325 ']' 00:04:40.175 04:59:22 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:40.175 04:59:22 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:40.175 04:59:22 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.175 04:59:22 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:40.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:40.176 04:59:22 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.176 04:59:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.176 [2024-12-09 04:59:22.555695] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:04:40.176 [2024-12-09 04:59:22.555747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid277325 ] 00:04:40.435 [2024-12-09 04:59:22.863635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.435 [2024-12-09 04:59:22.895734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.003 04:59:23 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.003 04:59:23 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:41.003 04:59:23 json_config -- json_config/common.sh@26 -- # echo '' 00:04:41.003 00:04:41.003 04:59:23 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:41.003 04:59:23 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:41.003 04:59:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:41.003 04:59:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.003 04:59:23 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:41.003 04:59:23 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:41.003 04:59:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:41.003 04:59:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.003 04:59:23 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:41.003 04:59:23 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:41.003 04:59:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:44.293 04:59:26 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:44.293 04:59:26 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:44.293 04:59:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:44.293 04:59:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.293 04:59:26 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:44.293 04:59:26 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:44.293 04:59:26 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:44.293 04:59:26 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:44.293 04:59:26 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:44.293 04:59:26 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:44.293 04:59:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:44.293 04:59:26 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:44.293 04:59:26 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:44.293 04:59:26 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:44.293 04:59:26 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:44.293 04:59:26 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:44.293 04:59:26 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:44.293 04:59:26 json_config -- json_config/json_config.sh@54 -- # sort 00:04:44.293 04:59:26 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:44.293 04:59:26 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:44.293 04:59:26 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:44.293 04:59:26 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:44.293 04:59:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:44.293 04:59:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.552 04:59:26 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:44.552 04:59:26 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:44.552 04:59:26 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:44.552 04:59:26 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:44.552 04:59:26 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:44.552 04:59:26 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:44.552 04:59:26 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:44.552 04:59:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:44.552 04:59:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.552 04:59:26 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:44.552 04:59:26 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:44.552 04:59:26 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:44.552 04:59:26 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:44.552 04:59:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:44.552 MallocForNvmf0 00:04:44.552 04:59:26 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:04:44.552 04:59:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:44.812 MallocForNvmf1 00:04:44.812 04:59:27 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:44.812 04:59:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:45.072 [2024-12-09 04:59:27.332868] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:45.072 04:59:27 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:45.072 04:59:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:45.332 04:59:27 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:45.332 04:59:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:45.332 04:59:27 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:45.332 04:59:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:45.591 04:59:27 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:45.591 04:59:27 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:45.851 [2024-12-09 04:59:28.143375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:45.851 04:59:28 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:45.851 04:59:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:45.851 04:59:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.851 04:59:28 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:45.851 04:59:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:45.851 04:59:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.851 04:59:28 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:45.851 04:59:28 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:45.851 04:59:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:46.111 MallocBdevForConfigChangeCheck 00:04:46.111 04:59:28 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:46.111 04:59:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:46.111 04:59:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.111 04:59:28 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:46.111 04:59:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:46.370 04:59:28 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:46.370 INFO: shutting down applications... 00:04:46.370 04:59:28 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:46.370 04:59:28 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:46.370 04:59:28 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:46.370 04:59:28 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:48.903 Calling clear_iscsi_subsystem 00:04:48.903 Calling clear_nvmf_subsystem 00:04:48.903 Calling clear_nbd_subsystem 00:04:48.903 Calling clear_ublk_subsystem 00:04:48.903 Calling clear_vhost_blk_subsystem 00:04:48.903 Calling clear_vhost_scsi_subsystem 00:04:48.903 Calling clear_bdev_subsystem 00:04:48.903 04:59:30 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:48.903 04:59:31 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:48.903 04:59:31 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:48.903 04:59:31 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.903 04:59:31 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:48.903 04:59:31 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:48.903 04:59:31 json_config -- json_config/json_config.sh@352 -- # break 00:04:48.903 04:59:31 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:48.903 04:59:31 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:48.903 04:59:31 json_config -- json_config/common.sh@31 -- # local app=target 00:04:48.903 04:59:31 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:48.903 04:59:31 json_config -- json_config/common.sh@35 -- # [[ -n 277325 ]] 00:04:48.903 04:59:31 json_config -- json_config/common.sh@38 -- # kill -SIGINT 277325 00:04:48.903 04:59:31 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:48.903 04:59:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.903 04:59:31 json_config -- json_config/common.sh@41 -- # kill -0 277325 00:04:48.903 04:59:31 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:49.471 04:59:31 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:49.471 04:59:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.471 04:59:31 json_config -- json_config/common.sh@41 -- # kill -0 277325 00:04:49.471 04:59:31 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:49.471 04:59:31 json_config -- json_config/common.sh@43 -- # break 00:04:49.471 04:59:31 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:49.471 04:59:31 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:49.471 SPDK target shutdown done 00:04:49.471 04:59:31 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:49.471 INFO: relaunching applications... 
00:04:49.471 04:59:31 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:49.471 04:59:31 json_config -- json_config/common.sh@9 -- # local app=target 00:04:49.471 04:59:31 json_config -- json_config/common.sh@10 -- # shift 00:04:49.471 04:59:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:49.471 04:59:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:49.471 04:59:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:49.471 04:59:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.471 04:59:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.471 04:59:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=279057 00:04:49.471 04:59:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:49.471 Waiting for target to run... 00:04:49.471 04:59:31 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:49.471 04:59:31 json_config -- json_config/common.sh@25 -- # waitforlisten 279057 /var/tmp/spdk_tgt.sock 00:04:49.471 04:59:31 json_config -- common/autotest_common.sh@835 -- # '[' -z 279057 ']' 00:04:49.471 04:59:31 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:49.471 04:59:31 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.471 04:59:31 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:49.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:49.471 04:59:31 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.471 04:59:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.471 [2024-12-09 04:59:31.930889] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:04:49.471 [2024-12-09 04:59:31.930944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid279057 ] 00:04:50.038 [2024-12-09 04:59:32.390020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.038 [2024-12-09 04:59:32.437368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.330 [2024-12-09 04:59:35.486641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:53.330 [2024-12-09 04:59:35.518994] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:53.899 04:59:36 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.899 04:59:36 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:53.899 04:59:36 json_config -- json_config/common.sh@26 -- # echo '' 00:04:53.899 00:04:53.899 04:59:36 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:53.899 04:59:36 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:53.900 INFO: Checking if target configuration is the same... 
00:04:53.900 04:59:36 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:53.900 04:59:36 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:53.900 04:59:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:53.900 + '[' 2 -ne 2 ']' 00:04:53.900 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:53.900 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:53.900 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:53.900 +++ basename /dev/fd/62 00:04:53.900 ++ mktemp /tmp/62.XXX 00:04:53.900 + tmp_file_1=/tmp/62.fVu 00:04:53.900 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:53.900 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:53.900 + tmp_file_2=/tmp/spdk_tgt_config.json.8Zj 00:04:53.900 + ret=0 00:04:53.900 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:54.159 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:54.159 + diff -u /tmp/62.fVu /tmp/spdk_tgt_config.json.8Zj 00:04:54.159 + echo 'INFO: JSON config files are the same' 00:04:54.159 INFO: JSON config files are the same 00:04:54.159 + rm /tmp/62.fVu /tmp/spdk_tgt_config.json.8Zj 00:04:54.159 + exit 0 00:04:54.159 04:59:36 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:54.159 04:59:36 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:54.159 INFO: changing configuration and checking if this can be detected... 
00:04:54.159 04:59:36 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:54.159 04:59:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:54.419 04:59:36 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:54.419 04:59:36 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:54.419 04:59:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:54.419 + '[' 2 -ne 2 ']' 00:04:54.419 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:54.419 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:54.419 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:54.419 +++ basename /dev/fd/62 00:04:54.419 ++ mktemp /tmp/62.XXX 00:04:54.419 + tmp_file_1=/tmp/62.v6R 00:04:54.419 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:54.419 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:54.419 + tmp_file_2=/tmp/spdk_tgt_config.json.cdS 00:04:54.419 + ret=0 00:04:54.419 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:54.680 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:54.680 + diff -u /tmp/62.v6R /tmp/spdk_tgt_config.json.cdS 00:04:54.680 + ret=1 00:04:54.680 + echo '=== Start of file: /tmp/62.v6R ===' 00:04:54.680 + cat /tmp/62.v6R 00:04:54.680 + echo '=== End of file: /tmp/62.v6R ===' 00:04:54.680 + echo '' 00:04:54.680 + echo '=== Start of file: /tmp/spdk_tgt_config.json.cdS ===' 00:04:54.680 + cat /tmp/spdk_tgt_config.json.cdS 00:04:54.941 + echo '=== End of file: /tmp/spdk_tgt_config.json.cdS ===' 00:04:54.941 + echo '' 00:04:54.941 + rm /tmp/62.v6R /tmp/spdk_tgt_config.json.cdS 00:04:54.941 + exit 1 00:04:54.941 04:59:37 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:54.941 INFO: configuration change detected. 
00:04:54.941 04:59:37 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:54.941 04:59:37 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:54.941 04:59:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.941 04:59:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.941 04:59:37 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:54.941 04:59:37 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:54.941 04:59:37 json_config -- json_config/json_config.sh@324 -- # [[ -n 279057 ]] 00:04:54.941 04:59:37 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:54.941 04:59:37 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:54.941 04:59:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.941 04:59:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.941 04:59:37 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:54.941 04:59:37 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:54.941 04:59:37 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:54.941 04:59:37 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:54.941 04:59:37 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:54.941 04:59:37 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:54.941 04:59:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:54.941 04:59:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.941 04:59:37 json_config -- json_config/json_config.sh@330 -- # killprocess 279057 00:04:54.941 04:59:37 json_config -- common/autotest_common.sh@954 -- # '[' -z 279057 ']' 00:04:54.941 04:59:37 json_config -- common/autotest_common.sh@958 -- # kill -0 279057 
00:04:54.941 04:59:37 json_config -- common/autotest_common.sh@959 -- # uname 00:04:54.941 04:59:37 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.941 04:59:37 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279057 00:04:54.941 04:59:37 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.941 04:59:37 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.941 04:59:37 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279057' 00:04:54.941 killing process with pid 279057 00:04:54.941 04:59:37 json_config -- common/autotest_common.sh@973 -- # kill 279057 00:04:54.941 04:59:37 json_config -- common/autotest_common.sh@978 -- # wait 279057 00:04:57.477 04:59:39 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:57.477 04:59:39 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:57.477 04:59:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:57.477 04:59:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.477 04:59:39 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:57.477 04:59:39 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:57.477 INFO: Success 00:04:57.477 00:04:57.477 real 0m17.179s 00:04:57.477 user 0m17.682s 00:04:57.477 sys 0m2.831s 00:04:57.477 04:59:39 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.477 04:59:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.477 ************************************ 00:04:57.477 END TEST json_config 00:04:57.477 ************************************ 00:04:57.477 04:59:39 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:57.477 04:59:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.478 04:59:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.478 04:59:39 -- common/autotest_common.sh@10 -- # set +x 00:04:57.478 ************************************ 00:04:57.478 START TEST json_config_extra_key 00:04:57.478 ************************************ 00:04:57.478 04:59:39 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:57.478 04:59:39 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:57.478 04:59:39 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:57.478 04:59:39 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:57.478 04:59:39 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:57.478 04:59:39 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.478 04:59:39 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:57.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.478 --rc genhtml_branch_coverage=1 00:04:57.478 --rc genhtml_function_coverage=1 00:04:57.478 --rc genhtml_legend=1 00:04:57.478 --rc geninfo_all_blocks=1 
00:04:57.478 --rc geninfo_unexecuted_blocks=1 00:04:57.478 00:04:57.478 ' 00:04:57.478 04:59:39 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:57.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.478 --rc genhtml_branch_coverage=1 00:04:57.478 --rc genhtml_function_coverage=1 00:04:57.478 --rc genhtml_legend=1 00:04:57.478 --rc geninfo_all_blocks=1 00:04:57.478 --rc geninfo_unexecuted_blocks=1 00:04:57.478 00:04:57.478 ' 00:04:57.478 04:59:39 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:57.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.478 --rc genhtml_branch_coverage=1 00:04:57.478 --rc genhtml_function_coverage=1 00:04:57.478 --rc genhtml_legend=1 00:04:57.478 --rc geninfo_all_blocks=1 00:04:57.478 --rc geninfo_unexecuted_blocks=1 00:04:57.478 00:04:57.478 ' 00:04:57.478 04:59:39 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:57.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.478 --rc genhtml_branch_coverage=1 00:04:57.478 --rc genhtml_function_coverage=1 00:04:57.478 --rc genhtml_legend=1 00:04:57.478 --rc geninfo_all_blocks=1 00:04:57.478 --rc geninfo_unexecuted_blocks=1 00:04:57.478 00:04:57.478 ' 00:04:57.478 04:59:39 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:57.478 04:59:39 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:57.478 04:59:39 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.478 04:59:39 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.478 04:59:39 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.478 04:59:39 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:57.478 04:59:39 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:57.478 04:59:39 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:57.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:57.478 04:59:39 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:57.478 04:59:39 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:57.478 04:59:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:57.478 04:59:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:57.478 04:59:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:57.478 04:59:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:57.478 04:59:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:57.478 04:59:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:57.478 04:59:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:57.478 04:59:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:57.478 04:59:39 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:57.478 04:59:39 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:57.478 INFO: launching applications... 00:04:57.478 04:59:39 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:57.479 04:59:39 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:57.479 04:59:39 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:57.479 04:59:39 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:57.479 04:59:39 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:57.479 04:59:39 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:57.479 04:59:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:57.479 04:59:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:57.479 04:59:39 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=280520 00:04:57.479 04:59:39 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:57.479 Waiting for target to run... 
00:04:57.479 04:59:39 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 280520 /var/tmp/spdk_tgt.sock 00:04:57.479 04:59:39 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 280520 ']' 00:04:57.479 04:59:39 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:57.479 04:59:39 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:57.479 04:59:39 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.479 04:59:39 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:57.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:57.479 04:59:39 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.479 04:59:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:57.479 [2024-12-09 04:59:39.807150] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:04:57.479 [2024-12-09 04:59:39.807201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280520 ] 00:04:58.046 [2024-12-09 04:59:40.278299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.046 [2024-12-09 04:59:40.328221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.306 04:59:40 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.306 04:59:40 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:58.306 04:59:40 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:58.306 00:04:58.306 04:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:58.306 INFO: shutting down applications... 00:04:58.306 04:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:58.306 04:59:40 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:58.306 04:59:40 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:58.306 04:59:40 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 280520 ]] 00:04:58.306 04:59:40 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 280520 00:04:58.306 04:59:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:58.306 04:59:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.306 04:59:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 280520 00:04:58.306 04:59:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:58.874 04:59:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:58.874 04:59:41 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.874 04:59:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 280520 00:04:58.874 04:59:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:58.874 04:59:41 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:58.874 04:59:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:58.874 04:59:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:58.874 SPDK target shutdown done 00:04:58.874 04:59:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:58.874 Success 00:04:58.874 00:04:58.874 real 0m1.603s 00:04:58.874 user 0m1.219s 00:04:58.874 sys 0m0.605s 00:04:58.874 04:59:41 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.874 04:59:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:58.874 ************************************ 00:04:58.874 END TEST json_config_extra_key 00:04:58.874 ************************************ 00:04:58.874 04:59:41 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:58.874 04:59:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.874 04:59:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.874 04:59:41 -- common/autotest_common.sh@10 -- # set +x 00:04:58.874 ************************************ 00:04:58.874 START TEST alias_rpc 00:04:58.874 ************************************ 00:04:58.874 04:59:41 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:58.874 * Looking for test storage... 
00:04:58.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:58.874 04:59:41 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:58.874 04:59:41 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.134 04:59:41 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.134 04:59:41 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.134 04:59:41 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:59.134 04:59:41 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.134 04:59:41 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.134 --rc genhtml_branch_coverage=1 00:04:59.134 --rc genhtml_function_coverage=1 00:04:59.134 --rc genhtml_legend=1 00:04:59.134 --rc geninfo_all_blocks=1 00:04:59.134 --rc geninfo_unexecuted_blocks=1 00:04:59.134 00:04:59.134 ' 00:04:59.134 04:59:41 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.134 --rc genhtml_branch_coverage=1 00:04:59.134 --rc genhtml_function_coverage=1 00:04:59.134 --rc genhtml_legend=1 00:04:59.134 --rc geninfo_all_blocks=1 00:04:59.134 --rc geninfo_unexecuted_blocks=1 00:04:59.134 00:04:59.134 ' 00:04:59.134 04:59:41 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:04:59.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.134 --rc genhtml_branch_coverage=1 00:04:59.134 --rc genhtml_function_coverage=1 00:04:59.134 --rc genhtml_legend=1 00:04:59.134 --rc geninfo_all_blocks=1 00:04:59.134 --rc geninfo_unexecuted_blocks=1 00:04:59.134 00:04:59.134 ' 00:04:59.134 04:59:41 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.134 --rc genhtml_branch_coverage=1 00:04:59.134 --rc genhtml_function_coverage=1 00:04:59.134 --rc genhtml_legend=1 00:04:59.134 --rc geninfo_all_blocks=1 00:04:59.134 --rc geninfo_unexecuted_blocks=1 00:04:59.134 00:04:59.134 ' 00:04:59.134 04:59:41 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:59.134 04:59:41 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=280893 00:04:59.134 04:59:41 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 280893 00:04:59.134 04:59:41 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.134 04:59:41 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 280893 ']' 00:04:59.134 04:59:41 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.134 04:59:41 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.134 04:59:41 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.134 04:59:41 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.134 04:59:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.134 [2024-12-09 04:59:41.485199] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:04:59.135 [2024-12-09 04:59:41.485274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280893 ] 00:04:59.135 [2024-12-09 04:59:41.575804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.395 [2024-12-09 04:59:41.618135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.965 04:59:42 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.965 04:59:42 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:59.965 04:59:42 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:00.225 04:59:42 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 280893 00:05:00.225 04:59:42 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 280893 ']' 00:05:00.225 04:59:42 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 280893 00:05:00.225 04:59:42 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:00.225 04:59:42 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.225 04:59:42 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280893 00:05:00.225 04:59:42 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.225 04:59:42 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.225 04:59:42 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280893' 00:05:00.225 killing process with pid 280893 00:05:00.225 04:59:42 alias_rpc -- common/autotest_common.sh@973 -- # kill 280893 00:05:00.225 04:59:42 alias_rpc -- common/autotest_common.sh@978 -- # wait 280893 00:05:00.485 00:05:00.485 real 0m1.679s 00:05:00.485 user 0m1.771s 00:05:00.485 sys 0m0.512s 00:05:00.485 04:59:42 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.485 04:59:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.485 ************************************ 00:05:00.485 END TEST alias_rpc 00:05:00.485 ************************************ 00:05:00.485 04:59:42 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:00.485 04:59:42 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:00.485 04:59:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.485 04:59:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.485 04:59:42 -- common/autotest_common.sh@10 -- # set +x 00:05:00.770 ************************************ 00:05:00.770 START TEST spdkcli_tcp 00:05:00.770 ************************************ 00:05:00.770 04:59:42 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:00.770 * Looking for test storage... 
00:05:00.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:00.770 04:59:43 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.770 04:59:43 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.770 04:59:43 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.770 04:59:43 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.770 04:59:43 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:00.770 04:59:43 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.770 04:59:43 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.770 --rc genhtml_branch_coverage=1 00:05:00.770 --rc genhtml_function_coverage=1 00:05:00.770 --rc genhtml_legend=1 00:05:00.770 --rc geninfo_all_blocks=1 00:05:00.770 --rc geninfo_unexecuted_blocks=1 00:05:00.770 00:05:00.770 ' 00:05:00.770 04:59:43 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.770 --rc genhtml_branch_coverage=1 00:05:00.770 --rc genhtml_function_coverage=1 00:05:00.770 --rc genhtml_legend=1 00:05:00.770 --rc geninfo_all_blocks=1 00:05:00.770 --rc geninfo_unexecuted_blocks=1 00:05:00.770 00:05:00.770 ' 00:05:00.770 04:59:43 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.770 --rc genhtml_branch_coverage=1 00:05:00.770 --rc genhtml_function_coverage=1 00:05:00.770 --rc genhtml_legend=1 00:05:00.770 --rc geninfo_all_blocks=1 00:05:00.770 --rc geninfo_unexecuted_blocks=1 00:05:00.770 00:05:00.770 ' 00:05:00.770 04:59:43 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.770 --rc genhtml_branch_coverage=1 00:05:00.770 --rc genhtml_function_coverage=1 00:05:00.770 --rc genhtml_legend=1 00:05:00.770 --rc geninfo_all_blocks=1 00:05:00.770 --rc geninfo_unexecuted_blocks=1 00:05:00.770 00:05:00.770 ' 00:05:00.770 04:59:43 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:00.770 04:59:43 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:00.770 04:59:43 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:00.770 04:59:43 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:00.770 04:59:43 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:00.770 04:59:43 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:00.770 04:59:43 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:00.770 04:59:43 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.770 04:59:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.770 04:59:43 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=281333 00:05:00.770 04:59:43 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 281333 00:05:00.770 04:59:43 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:00.770 04:59:43 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 281333 ']' 00:05:00.770 04:59:43 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.770 04:59:43 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.770 04:59:43 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.770 04:59:43 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.770 04:59:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.029 [2024-12-09 04:59:43.244752] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:05:01.029 [2024-12-09 04:59:43.244808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid281333 ] 00:05:01.029 [2024-12-09 04:59:43.335886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.029 [2024-12-09 04:59:43.379390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.029 [2024-12-09 04:59:43.379391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.970 04:59:44 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.970 04:59:44 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:01.970 04:59:44 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=281448 00:05:01.970 04:59:44 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:01.970 04:59:44 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:01.970 [ 00:05:01.970 "bdev_malloc_delete", 00:05:01.970 "bdev_malloc_create", 00:05:01.970 "bdev_null_resize", 00:05:01.970 "bdev_null_delete", 00:05:01.970 "bdev_null_create", 00:05:01.970 "bdev_nvme_cuse_unregister", 00:05:01.970 "bdev_nvme_cuse_register", 00:05:01.970 "bdev_opal_new_user", 00:05:01.970 "bdev_opal_set_lock_state", 00:05:01.970 "bdev_opal_delete", 00:05:01.970 "bdev_opal_get_info", 00:05:01.970 "bdev_opal_create", 00:05:01.970 "bdev_nvme_opal_revert", 00:05:01.970 "bdev_nvme_opal_init", 00:05:01.970 "bdev_nvme_send_cmd", 00:05:01.970 "bdev_nvme_set_keys", 00:05:01.970 "bdev_nvme_get_path_iostat", 00:05:01.970 "bdev_nvme_get_mdns_discovery_info", 00:05:01.970 "bdev_nvme_stop_mdns_discovery", 00:05:01.970 "bdev_nvme_start_mdns_discovery", 00:05:01.970 "bdev_nvme_set_multipath_policy", 00:05:01.970 "bdev_nvme_set_preferred_path", 00:05:01.970 "bdev_nvme_get_io_paths", 00:05:01.970 "bdev_nvme_remove_error_injection", 00:05:01.970 "bdev_nvme_add_error_injection", 00:05:01.970 "bdev_nvme_get_discovery_info", 00:05:01.970 "bdev_nvme_stop_discovery", 00:05:01.970 "bdev_nvme_start_discovery", 00:05:01.970 "bdev_nvme_get_controller_health_info", 00:05:01.970 "bdev_nvme_disable_controller", 00:05:01.970 "bdev_nvme_enable_controller", 00:05:01.970 "bdev_nvme_reset_controller", 00:05:01.970 "bdev_nvme_get_transport_statistics", 00:05:01.970 "bdev_nvme_apply_firmware", 00:05:01.970 "bdev_nvme_detach_controller", 00:05:01.970 "bdev_nvme_get_controllers", 00:05:01.970 "bdev_nvme_attach_controller", 00:05:01.970 "bdev_nvme_set_hotplug", 00:05:01.970 "bdev_nvme_set_options", 00:05:01.970 "bdev_passthru_delete", 00:05:01.970 "bdev_passthru_create", 00:05:01.970 "bdev_lvol_set_parent_bdev", 00:05:01.970 "bdev_lvol_set_parent", 00:05:01.970 "bdev_lvol_check_shallow_copy", 00:05:01.970 "bdev_lvol_start_shallow_copy", 00:05:01.970 "bdev_lvol_grow_lvstore", 00:05:01.970 
"bdev_lvol_get_lvols", 00:05:01.970 "bdev_lvol_get_lvstores", 00:05:01.970 "bdev_lvol_delete", 00:05:01.970 "bdev_lvol_set_read_only", 00:05:01.970 "bdev_lvol_resize", 00:05:01.970 "bdev_lvol_decouple_parent", 00:05:01.970 "bdev_lvol_inflate", 00:05:01.970 "bdev_lvol_rename", 00:05:01.970 "bdev_lvol_clone_bdev", 00:05:01.970 "bdev_lvol_clone", 00:05:01.970 "bdev_lvol_snapshot", 00:05:01.970 "bdev_lvol_create", 00:05:01.970 "bdev_lvol_delete_lvstore", 00:05:01.970 "bdev_lvol_rename_lvstore", 00:05:01.970 "bdev_lvol_create_lvstore", 00:05:01.970 "bdev_raid_set_options", 00:05:01.970 "bdev_raid_remove_base_bdev", 00:05:01.970 "bdev_raid_add_base_bdev", 00:05:01.970 "bdev_raid_delete", 00:05:01.970 "bdev_raid_create", 00:05:01.970 "bdev_raid_get_bdevs", 00:05:01.970 "bdev_error_inject_error", 00:05:01.970 "bdev_error_delete", 00:05:01.970 "bdev_error_create", 00:05:01.970 "bdev_split_delete", 00:05:01.970 "bdev_split_create", 00:05:01.970 "bdev_delay_delete", 00:05:01.970 "bdev_delay_create", 00:05:01.970 "bdev_delay_update_latency", 00:05:01.970 "bdev_zone_block_delete", 00:05:01.970 "bdev_zone_block_create", 00:05:01.970 "blobfs_create", 00:05:01.970 "blobfs_detect", 00:05:01.970 "blobfs_set_cache_size", 00:05:01.970 "bdev_aio_delete", 00:05:01.970 "bdev_aio_rescan", 00:05:01.970 "bdev_aio_create", 00:05:01.970 "bdev_ftl_set_property", 00:05:01.970 "bdev_ftl_get_properties", 00:05:01.970 "bdev_ftl_get_stats", 00:05:01.970 "bdev_ftl_unmap", 00:05:01.970 "bdev_ftl_unload", 00:05:01.970 "bdev_ftl_delete", 00:05:01.970 "bdev_ftl_load", 00:05:01.970 "bdev_ftl_create", 00:05:01.970 "bdev_virtio_attach_controller", 00:05:01.970 "bdev_virtio_scsi_get_devices", 00:05:01.970 "bdev_virtio_detach_controller", 00:05:01.970 "bdev_virtio_blk_set_hotplug", 00:05:01.970 "bdev_iscsi_delete", 00:05:01.970 "bdev_iscsi_create", 00:05:01.970 "bdev_iscsi_set_options", 00:05:01.970 "accel_error_inject_error", 00:05:01.970 "ioat_scan_accel_module", 00:05:01.970 "dsa_scan_accel_module", 
00:05:01.970 "iaa_scan_accel_module", 00:05:01.970 "vfu_virtio_create_fs_endpoint", 00:05:01.970 "vfu_virtio_create_scsi_endpoint", 00:05:01.970 "vfu_virtio_scsi_remove_target", 00:05:01.970 "vfu_virtio_scsi_add_target", 00:05:01.970 "vfu_virtio_create_blk_endpoint", 00:05:01.970 "vfu_virtio_delete_endpoint", 00:05:01.970 "keyring_file_remove_key", 00:05:01.970 "keyring_file_add_key", 00:05:01.970 "keyring_linux_set_options", 00:05:01.970 "fsdev_aio_delete", 00:05:01.970 "fsdev_aio_create", 00:05:01.970 "iscsi_get_histogram", 00:05:01.970 "iscsi_enable_histogram", 00:05:01.970 "iscsi_set_options", 00:05:01.970 "iscsi_get_auth_groups", 00:05:01.970 "iscsi_auth_group_remove_secret", 00:05:01.970 "iscsi_auth_group_add_secret", 00:05:01.970 "iscsi_delete_auth_group", 00:05:01.970 "iscsi_create_auth_group", 00:05:01.970 "iscsi_set_discovery_auth", 00:05:01.970 "iscsi_get_options", 00:05:01.970 "iscsi_target_node_request_logout", 00:05:01.970 "iscsi_target_node_set_redirect", 00:05:01.970 "iscsi_target_node_set_auth", 00:05:01.970 "iscsi_target_node_add_lun", 00:05:01.970 "iscsi_get_stats", 00:05:01.970 "iscsi_get_connections", 00:05:01.970 "iscsi_portal_group_set_auth", 00:05:01.970 "iscsi_start_portal_group", 00:05:01.970 "iscsi_delete_portal_group", 00:05:01.970 "iscsi_create_portal_group", 00:05:01.970 "iscsi_get_portal_groups", 00:05:01.970 "iscsi_delete_target_node", 00:05:01.970 "iscsi_target_node_remove_pg_ig_maps", 00:05:01.970 "iscsi_target_node_add_pg_ig_maps", 00:05:01.970 "iscsi_create_target_node", 00:05:01.970 "iscsi_get_target_nodes", 00:05:01.970 "iscsi_delete_initiator_group", 00:05:01.970 "iscsi_initiator_group_remove_initiators", 00:05:01.970 "iscsi_initiator_group_add_initiators", 00:05:01.970 "iscsi_create_initiator_group", 00:05:01.970 "iscsi_get_initiator_groups", 00:05:01.970 "nvmf_set_crdt", 00:05:01.970 "nvmf_set_config", 00:05:01.970 "nvmf_set_max_subsystems", 00:05:01.970 "nvmf_stop_mdns_prr", 00:05:01.970 "nvmf_publish_mdns_prr", 
00:05:01.970 "nvmf_subsystem_get_listeners", 00:05:01.970 "nvmf_subsystem_get_qpairs", 00:05:01.970 "nvmf_subsystem_get_controllers", 00:05:01.970 "nvmf_get_stats", 00:05:01.970 "nvmf_get_transports", 00:05:01.970 "nvmf_create_transport", 00:05:01.970 "nvmf_get_targets", 00:05:01.970 "nvmf_delete_target", 00:05:01.970 "nvmf_create_target", 00:05:01.970 "nvmf_subsystem_allow_any_host", 00:05:01.970 "nvmf_subsystem_set_keys", 00:05:01.970 "nvmf_subsystem_remove_host", 00:05:01.970 "nvmf_subsystem_add_host", 00:05:01.970 "nvmf_ns_remove_host", 00:05:01.970 "nvmf_ns_add_host", 00:05:01.970 "nvmf_subsystem_remove_ns", 00:05:01.970 "nvmf_subsystem_set_ns_ana_group", 00:05:01.970 "nvmf_subsystem_add_ns", 00:05:01.970 "nvmf_subsystem_listener_set_ana_state", 00:05:01.970 "nvmf_discovery_get_referrals", 00:05:01.970 "nvmf_discovery_remove_referral", 00:05:01.970 "nvmf_discovery_add_referral", 00:05:01.970 "nvmf_subsystem_remove_listener", 00:05:01.970 "nvmf_subsystem_add_listener", 00:05:01.970 "nvmf_delete_subsystem", 00:05:01.970 "nvmf_create_subsystem", 00:05:01.970 "nvmf_get_subsystems", 00:05:01.970 "env_dpdk_get_mem_stats", 00:05:01.970 "nbd_get_disks", 00:05:01.970 "nbd_stop_disk", 00:05:01.970 "nbd_start_disk", 00:05:01.970 "ublk_recover_disk", 00:05:01.970 "ublk_get_disks", 00:05:01.970 "ublk_stop_disk", 00:05:01.970 "ublk_start_disk", 00:05:01.970 "ublk_destroy_target", 00:05:01.970 "ublk_create_target", 00:05:01.971 "virtio_blk_create_transport", 00:05:01.971 "virtio_blk_get_transports", 00:05:01.971 "vhost_controller_set_coalescing", 00:05:01.971 "vhost_get_controllers", 00:05:01.971 "vhost_delete_controller", 00:05:01.971 "vhost_create_blk_controller", 00:05:01.971 "vhost_scsi_controller_remove_target", 00:05:01.971 "vhost_scsi_controller_add_target", 00:05:01.971 "vhost_start_scsi_controller", 00:05:01.971 "vhost_create_scsi_controller", 00:05:01.971 "thread_set_cpumask", 00:05:01.971 "scheduler_set_options", 00:05:01.971 "framework_get_governor", 00:05:01.971 
"framework_get_scheduler", 00:05:01.971 "framework_set_scheduler", 00:05:01.971 "framework_get_reactors", 00:05:01.971 "thread_get_io_channels", 00:05:01.971 "thread_get_pollers", 00:05:01.971 "thread_get_stats", 00:05:01.971 "framework_monitor_context_switch", 00:05:01.971 "spdk_kill_instance", 00:05:01.971 "log_enable_timestamps", 00:05:01.971 "log_get_flags", 00:05:01.971 "log_clear_flag", 00:05:01.971 "log_set_flag", 00:05:01.971 "log_get_level", 00:05:01.971 "log_set_level", 00:05:01.971 "log_get_print_level", 00:05:01.971 "log_set_print_level", 00:05:01.971 "framework_enable_cpumask_locks", 00:05:01.971 "framework_disable_cpumask_locks", 00:05:01.971 "framework_wait_init", 00:05:01.971 "framework_start_init", 00:05:01.971 "scsi_get_devices", 00:05:01.971 "bdev_get_histogram", 00:05:01.971 "bdev_enable_histogram", 00:05:01.971 "bdev_set_qos_limit", 00:05:01.971 "bdev_set_qd_sampling_period", 00:05:01.971 "bdev_get_bdevs", 00:05:01.971 "bdev_reset_iostat", 00:05:01.971 "bdev_get_iostat", 00:05:01.971 "bdev_examine", 00:05:01.971 "bdev_wait_for_examine", 00:05:01.971 "bdev_set_options", 00:05:01.971 "accel_get_stats", 00:05:01.971 "accel_set_options", 00:05:01.971 "accel_set_driver", 00:05:01.971 "accel_crypto_key_destroy", 00:05:01.971 "accel_crypto_keys_get", 00:05:01.971 "accel_crypto_key_create", 00:05:01.971 "accel_assign_opc", 00:05:01.971 "accel_get_module_info", 00:05:01.971 "accel_get_opc_assignments", 00:05:01.971 "vmd_rescan", 00:05:01.971 "vmd_remove_device", 00:05:01.971 "vmd_enable", 00:05:01.971 "sock_get_default_impl", 00:05:01.971 "sock_set_default_impl", 00:05:01.971 "sock_impl_set_options", 00:05:01.971 "sock_impl_get_options", 00:05:01.971 "iobuf_get_stats", 00:05:01.971 "iobuf_set_options", 00:05:01.971 "keyring_get_keys", 00:05:01.971 "vfu_tgt_set_base_path", 00:05:01.971 "framework_get_pci_devices", 00:05:01.971 "framework_get_config", 00:05:01.971 "framework_get_subsystems", 00:05:01.971 "fsdev_set_opts", 00:05:01.971 "fsdev_get_opts", 
00:05:01.971 "trace_get_info", 00:05:01.971 "trace_get_tpoint_group_mask", 00:05:01.971 "trace_disable_tpoint_group", 00:05:01.971 "trace_enable_tpoint_group", 00:05:01.971 "trace_clear_tpoint_mask", 00:05:01.971 "trace_set_tpoint_mask", 00:05:01.971 "notify_get_notifications", 00:05:01.971 "notify_get_types", 00:05:01.971 "spdk_get_version", 00:05:01.971 "rpc_get_methods" 00:05:01.971 ] 00:05:01.971 04:59:44 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:01.971 04:59:44 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:01.971 04:59:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.971 04:59:44 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:01.971 04:59:44 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 281333 00:05:01.971 04:59:44 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 281333 ']' 00:05:01.971 04:59:44 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 281333 00:05:01.971 04:59:44 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:01.971 04:59:44 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.971 04:59:44 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 281333 00:05:01.971 04:59:44 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.971 04:59:44 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.971 04:59:44 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 281333' 00:05:01.971 killing process with pid 281333 00:05:01.971 04:59:44 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 281333 00:05:01.971 04:59:44 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 281333 00:05:02.541 00:05:02.541 real 0m1.730s 00:05:02.541 user 0m3.133s 00:05:02.541 sys 0m0.536s 00:05:02.541 04:59:44 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.541 04:59:44 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:05:02.541 ************************************ 00:05:02.541 END TEST spdkcli_tcp 00:05:02.541 ************************************ 00:05:02.541 04:59:44 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:02.541 04:59:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.541 04:59:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.541 04:59:44 -- common/autotest_common.sh@10 -- # set +x 00:05:02.541 ************************************ 00:05:02.541 START TEST dpdk_mem_utility 00:05:02.541 ************************************ 00:05:02.541 04:59:44 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:02.541 * Looking for test storage... 00:05:02.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:02.541 04:59:44 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:02.541 04:59:44 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:02.541 04:59:44 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:02.541 04:59:44 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.541 04:59:44 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:02.541 04:59:44 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.541 04:59:44 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 
00:05:02.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.541 --rc genhtml_branch_coverage=1 00:05:02.541 --rc genhtml_function_coverage=1 00:05:02.541 --rc genhtml_legend=1 00:05:02.541 --rc geninfo_all_blocks=1 00:05:02.541 --rc geninfo_unexecuted_blocks=1 00:05:02.541 00:05:02.541 ' 00:05:02.541 04:59:44 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:02.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.541 --rc genhtml_branch_coverage=1 00:05:02.541 --rc genhtml_function_coverage=1 00:05:02.541 --rc genhtml_legend=1 00:05:02.541 --rc geninfo_all_blocks=1 00:05:02.541 --rc geninfo_unexecuted_blocks=1 00:05:02.541 00:05:02.541 ' 00:05:02.541 04:59:44 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:02.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.541 --rc genhtml_branch_coverage=1 00:05:02.541 --rc genhtml_function_coverage=1 00:05:02.541 --rc genhtml_legend=1 00:05:02.541 --rc geninfo_all_blocks=1 00:05:02.541 --rc geninfo_unexecuted_blocks=1 00:05:02.541 00:05:02.541 ' 00:05:02.541 04:59:44 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:02.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.541 --rc genhtml_branch_coverage=1 00:05:02.541 --rc genhtml_function_coverage=1 00:05:02.541 --rc genhtml_legend=1 00:05:02.541 --rc geninfo_all_blocks=1 00:05:02.541 --rc geninfo_unexecuted_blocks=1 00:05:02.541 00:05:02.541 ' 00:05:02.541 04:59:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:02.541 04:59:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=281779 00:05:02.541 04:59:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 281779 00:05:02.541 04:59:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.541 04:59:44 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 281779 ']' 00:05:02.541 04:59:44 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.541 04:59:44 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.541 04:59:44 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.541 04:59:44 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.541 04:59:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.805 [2024-12-09 04:59:45.051463] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:05:02.805 [2024-12-09 04:59:45.051519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid281779 ] 00:05:02.805 [2024-12-09 04:59:45.144611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.805 [2024-12-09 04:59:45.187147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.745 04:59:45 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.745 04:59:45 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:03.745 04:59:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:03.745 04:59:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:03.745 04:59:45 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.745 
04:59:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.745 { 00:05:03.745 "filename": "/tmp/spdk_mem_dump.txt" 00:05:03.745 } 00:05:03.745 04:59:45 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.745 04:59:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:03.745 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:03.745 1 heaps totaling size 818.000000 MiB 00:05:03.745 size: 818.000000 MiB heap id: 0 00:05:03.745 end heaps---------- 00:05:03.745 9 mempools totaling size 603.782043 MiB 00:05:03.745 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:03.745 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:03.745 size: 100.555481 MiB name: bdev_io_281779 00:05:03.745 size: 50.003479 MiB name: msgpool_281779 00:05:03.745 size: 36.509338 MiB name: fsdev_io_281779 00:05:03.745 size: 21.763794 MiB name: PDU_Pool 00:05:03.745 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:03.745 size: 4.133484 MiB name: evtpool_281779 00:05:03.745 size: 0.026123 MiB name: Session_Pool 00:05:03.745 end mempools------- 00:05:03.745 6 memzones totaling size 4.142822 MiB 00:05:03.745 size: 1.000366 MiB name: RG_ring_0_281779 00:05:03.745 size: 1.000366 MiB name: RG_ring_1_281779 00:05:03.745 size: 1.000366 MiB name: RG_ring_4_281779 00:05:03.745 size: 1.000366 MiB name: RG_ring_5_281779 00:05:03.745 size: 0.125366 MiB name: RG_ring_2_281779 00:05:03.745 size: 0.015991 MiB name: RG_ring_3_281779 00:05:03.745 end memzones------- 00:05:03.745 04:59:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:03.745 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:03.745 list of free elements. 
size: 10.852478 MiB 00:05:03.745 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:03.745 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:03.745 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:03.745 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:03.745 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:03.745 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:03.745 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:03.745 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:03.745 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:03.745 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:03.745 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:03.745 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:03.745 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:03.745 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:03.745 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:03.745 list of standard malloc elements. 
size: 199.218628 MiB 00:05:03.745 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:03.745 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:03.745 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:03.745 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:03.745 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:03.745 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:03.745 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:03.745 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:03.745 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:03.745 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:03.745 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:03.746 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:03.746 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:03.746 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:03.746 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:03.746 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:03.746 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:03.746 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:03.746 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:03.746 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:03.746 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:03.746 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:03.746 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:03.746 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:03.746 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:03.746 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:03.746 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:03.746 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:03.746 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:03.746 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:03.746 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:03.746 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:03.746 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:03.746 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:03.746 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:03.746 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:03.746 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:03.746 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:03.746 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:03.746 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:03.746 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:03.746 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:03.746 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:03.746 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:03.746 list of memzone associated elements. 
size: 607.928894 MiB 00:05:03.746 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:03.746 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:03.746 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:03.746 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:03.746 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:03.746 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_281779_0 00:05:03.746 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:03.746 associated memzone info: size: 48.002930 MiB name: MP_msgpool_281779_0 00:05:03.746 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:03.746 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_281779_0 00:05:03.746 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:03.746 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:03.746 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:03.746 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:03.746 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:03.746 associated memzone info: size: 3.000122 MiB name: MP_evtpool_281779_0 00:05:03.746 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:03.746 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_281779 00:05:03.746 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:03.746 associated memzone info: size: 1.007996 MiB name: MP_evtpool_281779 00:05:03.746 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:03.746 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:03.746 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:03.746 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:03.746 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:03.746 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:03.746 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:03.746 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:03.746 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:03.746 associated memzone info: size: 1.000366 MiB name: RG_ring_0_281779 00:05:03.746 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:03.746 associated memzone info: size: 1.000366 MiB name: RG_ring_1_281779 00:05:03.746 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:03.746 associated memzone info: size: 1.000366 MiB name: RG_ring_4_281779 00:05:03.746 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:03.746 associated memzone info: size: 1.000366 MiB name: RG_ring_5_281779 00:05:03.746 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:03.746 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_281779 00:05:03.746 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:03.746 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_281779 00:05:03.746 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:03.746 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:03.746 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:03.746 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:03.746 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:03.746 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:03.746 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:03.746 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_281779 00:05:03.746 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:03.746 associated memzone info: size: 0.125366 MiB name: RG_ring_2_281779 00:05:03.746 element at address: 0x2000064f5b80 with size: 0.031738 MiB 
00:05:03.746 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:03.746 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:03.746 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:03.746 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:03.746 associated memzone info: size: 0.015991 MiB name: RG_ring_3_281779 00:05:03.746 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:03.746 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:03.746 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:03.746 associated memzone info: size: 0.000183 MiB name: MP_msgpool_281779 00:05:03.746 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:03.746 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_281779 00:05:03.746 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:03.746 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_281779 00:05:03.746 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:03.746 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:03.746 04:59:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:03.746 04:59:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 281779 00:05:03.746 04:59:45 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 281779 ']' 00:05:03.746 04:59:45 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 281779 00:05:03.746 04:59:45 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:03.746 04:59:45 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.746 04:59:45 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 281779 00:05:03.746 04:59:46 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.746 04:59:46 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.746 04:59:46 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 281779' 00:05:03.746 killing process with pid 281779 00:05:03.746 04:59:46 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 281779 00:05:03.746 04:59:46 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 281779 00:05:04.006 00:05:04.006 real 0m1.556s 00:05:04.006 user 0m1.603s 00:05:04.006 sys 0m0.466s 00:05:04.006 04:59:46 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.006 04:59:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:04.006 ************************************ 00:05:04.006 END TEST dpdk_mem_utility 00:05:04.006 ************************************ 00:05:04.006 04:59:46 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:04.006 04:59:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.006 04:59:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.006 04:59:46 -- common/autotest_common.sh@10 -- # set +x 00:05:04.006 ************************************ 00:05:04.006 START TEST event 00:05:04.006 ************************************ 00:05:04.006 04:59:46 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:04.267 * Looking for test storage... 
00:05:04.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:04.267 04:59:46 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:04.267 04:59:46 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:04.267 04:59:46 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:04.267 04:59:46 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:04.267 04:59:46 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.267 04:59:46 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.267 04:59:46 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.267 04:59:46 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.267 04:59:46 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.267 04:59:46 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.267 04:59:46 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.267 04:59:46 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.267 04:59:46 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.267 04:59:46 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.267 04:59:46 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.267 04:59:46 event -- scripts/common.sh@344 -- # case "$op" in 00:05:04.267 04:59:46 event -- scripts/common.sh@345 -- # : 1 00:05:04.267 04:59:46 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.267 04:59:46 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.267 04:59:46 event -- scripts/common.sh@365 -- # decimal 1 00:05:04.267 04:59:46 event -- scripts/common.sh@353 -- # local d=1 00:05:04.267 04:59:46 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.267 04:59:46 event -- scripts/common.sh@355 -- # echo 1 00:05:04.267 04:59:46 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.267 04:59:46 event -- scripts/common.sh@366 -- # decimal 2 00:05:04.267 04:59:46 event -- scripts/common.sh@353 -- # local d=2 00:05:04.267 04:59:46 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.267 04:59:46 event -- scripts/common.sh@355 -- # echo 2 00:05:04.267 04:59:46 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.267 04:59:46 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.267 04:59:46 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.267 04:59:46 event -- scripts/common.sh@368 -- # return 0 00:05:04.267 04:59:46 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.267 04:59:46 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.267 --rc genhtml_branch_coverage=1 00:05:04.267 --rc genhtml_function_coverage=1 00:05:04.267 --rc genhtml_legend=1 00:05:04.267 --rc geninfo_all_blocks=1 00:05:04.267 --rc geninfo_unexecuted_blocks=1 00:05:04.267 00:05:04.267 ' 00:05:04.267 04:59:46 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.267 --rc genhtml_branch_coverage=1 00:05:04.267 --rc genhtml_function_coverage=1 00:05:04.267 --rc genhtml_legend=1 00:05:04.267 --rc geninfo_all_blocks=1 00:05:04.267 --rc geninfo_unexecuted_blocks=1 00:05:04.267 00:05:04.267 ' 00:05:04.267 04:59:46 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:04.267 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:04.267 --rc genhtml_branch_coverage=1 00:05:04.267 --rc genhtml_function_coverage=1 00:05:04.267 --rc genhtml_legend=1 00:05:04.267 --rc geninfo_all_blocks=1 00:05:04.267 --rc geninfo_unexecuted_blocks=1 00:05:04.267 00:05:04.267 ' 00:05:04.267 04:59:46 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.267 --rc genhtml_branch_coverage=1 00:05:04.267 --rc genhtml_function_coverage=1 00:05:04.267 --rc genhtml_legend=1 00:05:04.267 --rc geninfo_all_blocks=1 00:05:04.267 --rc geninfo_unexecuted_blocks=1 00:05:04.267 00:05:04.267 ' 00:05:04.267 04:59:46 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:04.267 04:59:46 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:04.267 04:59:46 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:04.267 04:59:46 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:04.267 04:59:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.267 04:59:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.267 ************************************ 00:05:04.267 START TEST event_perf 00:05:04.267 ************************************ 00:05:04.267 04:59:46 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:04.267 Running I/O for 1 seconds...[2024-12-09 04:59:46.696914] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:05:04.268 [2024-12-09 04:59:46.696992] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid282106 ] 00:05:04.540 [2024-12-09 04:59:46.793368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:04.540 [2024-12-09 04:59:46.835412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.540 [2024-12-09 04:59:46.835524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.540 [2024-12-09 04:59:46.835631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.540 [2024-12-09 04:59:46.835632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:05.540 Running I/O for 1 seconds... 00:05:05.540 lcore 0: 207798 00:05:05.540 lcore 1: 207796 00:05:05.540 lcore 2: 207797 00:05:05.540 lcore 3: 207797 00:05:05.540 done. 
00:05:05.540 00:05:05.540 real 0m1.233s 00:05:05.540 user 0m4.135s 00:05:05.540 sys 0m0.094s 00:05:05.540 04:59:47 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.540 04:59:47 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.540 ************************************ 00:05:05.540 END TEST event_perf 00:05:05.540 ************************************ 00:05:05.540 04:59:47 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:05.540 04:59:47 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:05.540 04:59:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.540 04:59:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.540 ************************************ 00:05:05.540 START TEST event_reactor 00:05:05.540 ************************************ 00:05:05.540 04:59:47 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:05.799 [2024-12-09 04:59:48.019926] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:05:05.799 [2024-12-09 04:59:48.020009] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid282391 ] 00:05:05.799 [2024-12-09 04:59:48.117161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.799 [2024-12-09 04:59:48.156782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.179 test_start 00:05:07.179 oneshot 00:05:07.179 tick 100 00:05:07.179 tick 100 00:05:07.179 tick 250 00:05:07.179 tick 100 00:05:07.179 tick 100 00:05:07.179 tick 100 00:05:07.179 tick 250 00:05:07.179 tick 500 00:05:07.179 tick 100 00:05:07.179 tick 100 00:05:07.179 tick 250 00:05:07.179 tick 100 00:05:07.179 tick 100 00:05:07.179 test_end 00:05:07.179 00:05:07.179 real 0m1.236s 00:05:07.179 user 0m1.129s 00:05:07.179 sys 0m0.102s 00:05:07.179 04:59:49 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.179 04:59:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:07.179 ************************************ 00:05:07.179 END TEST event_reactor 00:05:07.179 ************************************ 00:05:07.179 04:59:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:07.179 04:59:49 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:07.179 04:59:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.179 04:59:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.179 ************************************ 00:05:07.179 START TEST event_reactor_perf 00:05:07.179 ************************************ 00:05:07.179 04:59:49 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:05:07.179 [2024-12-09 04:59:49.337883] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:05:07.179 [2024-12-09 04:59:49.337967] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid282564 ] 00:05:07.179 [2024-12-09 04:59:49.433275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.179 [2024-12-09 04:59:49.475968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.119 test_start 00:05:08.119 test_end 00:05:08.119 Performance: 510593 events per second 00:05:08.119 00:05:08.119 real 0m1.240s 00:05:08.119 user 0m1.147s 00:05:08.119 sys 0m0.089s 00:05:08.119 04:59:50 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.119 04:59:50 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:08.119 ************************************ 00:05:08.119 END TEST event_reactor_perf 00:05:08.119 ************************************ 00:05:08.379 04:59:50 event -- event/event.sh@49 -- # uname -s 00:05:08.379 04:59:50 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:08.379 04:59:50 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:08.379 04:59:50 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.379 04:59:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.379 04:59:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.379 ************************************ 00:05:08.379 START TEST event_scheduler 00:05:08.379 ************************************ 00:05:08.379 04:59:50 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:08.379 * Looking for test storage... 00:05:08.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:08.379 04:59:50 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.379 04:59:50 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.379 04:59:50 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:08.379 04:59:50 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.379 04:59:50 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:08.380 04:59:50 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.380 04:59:50 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:08.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.380 --rc genhtml_branch_coverage=1 00:05:08.380 --rc genhtml_function_coverage=1 00:05:08.380 --rc genhtml_legend=1 00:05:08.380 --rc geninfo_all_blocks=1 00:05:08.380 --rc geninfo_unexecuted_blocks=1 00:05:08.380 00:05:08.380 ' 00:05:08.380 04:59:50 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:08.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.380 --rc genhtml_branch_coverage=1 00:05:08.380 --rc genhtml_function_coverage=1 00:05:08.380 --rc 
genhtml_legend=1 00:05:08.380 --rc geninfo_all_blocks=1 00:05:08.380 --rc geninfo_unexecuted_blocks=1 00:05:08.380 00:05:08.380 ' 00:05:08.380 04:59:50 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:08.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.380 --rc genhtml_branch_coverage=1 00:05:08.380 --rc genhtml_function_coverage=1 00:05:08.380 --rc genhtml_legend=1 00:05:08.380 --rc geninfo_all_blocks=1 00:05:08.380 --rc geninfo_unexecuted_blocks=1 00:05:08.380 00:05:08.380 ' 00:05:08.380 04:59:50 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:08.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.380 --rc genhtml_branch_coverage=1 00:05:08.380 --rc genhtml_function_coverage=1 00:05:08.380 --rc genhtml_legend=1 00:05:08.380 --rc geninfo_all_blocks=1 00:05:08.380 --rc geninfo_unexecuted_blocks=1 00:05:08.380 00:05:08.380 ' 00:05:08.380 04:59:50 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:08.380 04:59:50 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=282885 00:05:08.380 04:59:50 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:08.380 04:59:50 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.380 04:59:50 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 282885 00:05:08.380 04:59:50 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 282885 ']' 00:05:08.380 04:59:50 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.380 04:59:50 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.380 04:59:50 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.380 04:59:50 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.380 04:59:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:08.640 [2024-12-09 04:59:50.888353] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:05:08.640 [2024-12-09 04:59:50.888412] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid282885 ] 00:05:08.640 [2024-12-09 04:59:50.980172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:08.640 [2024-12-09 04:59:51.024397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.640 [2024-12-09 04:59:51.024510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.640 [2024-12-09 04:59:51.024605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:08.640 [2024-12-09 04:59:51.024608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.579 04:59:51 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.579 04:59:51 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:09.579 04:59:51 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:09.579 04:59:51 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.579 04:59:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.579 [2024-12-09 04:59:51.735120] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:09.579 [2024-12-09 04:59:51.735140] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:09.579 [2024-12-09 04:59:51.735151] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:09.579 [2024-12-09 04:59:51.735158] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:09.579 [2024-12-09 04:59:51.735168] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:09.580 04:59:51 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.580 04:59:51 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:09.580 04:59:51 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.580 04:59:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.580 [2024-12-09 04:59:51.810883] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:09.580 04:59:51 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.580 04:59:51 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:09.580 04:59:51 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.580 04:59:51 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.580 04:59:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.580 ************************************ 00:05:09.580 START TEST scheduler_create_thread 00:05:09.580 ************************************ 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.580 2 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.580 3 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.580 4 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.580 5 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.580 04:59:51 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.580 6 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.580 7 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.580 8 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.580 04:59:51 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.580 9 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.580 10 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.580 04:59:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.521 04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.521 04:59:52 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:10.521 04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.521 04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.898 04:59:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.898 04:59:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:11.898 04:59:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:11.898 04:59:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.898 04:59:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.833 04:59:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.833 00:05:12.833 real 0m3.382s 00:05:12.833 user 0m0.024s 00:05:12.833 sys 0m0.007s 00:05:12.833 04:59:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.833 04:59:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.833 ************************************ 00:05:12.833 END TEST scheduler_create_thread 00:05:12.833 ************************************ 00:05:12.833 04:59:55 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:12.833 04:59:55 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 282885 00:05:12.833 04:59:55 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 282885 ']' 00:05:12.833 04:59:55 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 282885 00:05:12.833 04:59:55 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:12.833 04:59:55 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.833 04:59:55 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 282885 00:05:13.090 04:59:55 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:13.091 04:59:55 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:13.091 04:59:55 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 282885' 00:05:13.091 killing process with pid 282885 00:05:13.091 04:59:55 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 282885 00:05:13.091 04:59:55 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 282885 00:05:13.350 [2024-12-09 04:59:55.614879] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:13.609 00:05:13.609 real 0m5.218s 00:05:13.609 user 0m10.680s 00:05:13.609 sys 0m0.480s 00:05:13.609 04:59:55 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.609 04:59:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.609 ************************************ 00:05:13.609 END TEST event_scheduler 00:05:13.609 ************************************ 00:05:13.609 04:59:55 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:13.610 04:59:55 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:13.610 04:59:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.610 04:59:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.610 04:59:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.610 ************************************ 00:05:13.610 START TEST app_repeat 00:05:13.610 ************************************ 00:05:13.610 04:59:55 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:13.610 04:59:55 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.610 04:59:55 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.610 04:59:55 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:13.610 04:59:55 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.610 04:59:55 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:13.610 04:59:55 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:13.610 04:59:55 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:13.610 04:59:55 event.app_repeat -- event/event.sh@19 -- # repeat_pid=283859 00:05:13.610 04:59:55 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.610 04:59:55 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:13.610 04:59:55 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 283859' 00:05:13.610 Process app_repeat pid: 283859 00:05:13.610 04:59:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:13.610 04:59:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:13.610 spdk_app_start Round 0 00:05:13.610 04:59:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 283859 /var/tmp/spdk-nbd.sock 00:05:13.610 04:59:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 283859 ']' 00:05:13.610 04:59:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.610 04:59:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.610 04:59:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:13.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:13.610 04:59:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.610 04:59:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.610 [2024-12-09 04:59:55.994163] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:05:13.610 [2024-12-09 04:59:55.994236] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid283859 ] 00:05:13.868 [2024-12-09 04:59:56.089437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.868 [2024-12-09 04:59:56.128695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.868 [2024-12-09 04:59:56.128696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.868 04:59:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.868 04:59:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:13.868 04:59:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.127 Malloc0 00:05:14.127 04:59:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.388 Malloc1 00:05:14.388 04:59:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.388 04:59:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.388 04:59:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.388 04:59:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:14.388 04:59:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.388 04:59:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:14.388 04:59:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.388 
04:59:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.388 04:59:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.388 04:59:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:14.388 04:59:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.388 04:59:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:14.388 04:59:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:14.388 04:59:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:14.388 04:59:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.388 04:59:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:14.647 /dev/nbd0 00:05:14.647 04:59:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:14.647 04:59:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:14.647 04:59:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:14.647 04:59:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:14.647 04:59:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:14.647 04:59:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:14.647 04:59:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:14.647 04:59:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:14.647 04:59:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:14.647 04:59:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:14.647 04:59:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:14.647 1+0 records in 00:05:14.647 1+0 records out 00:05:14.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022538 s, 18.2 MB/s 00:05:14.647 04:59:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.647 04:59:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:14.647 04:59:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.647 04:59:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:14.647 04:59:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:14.647 04:59:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.647 04:59:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.647 04:59:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:14.906 /dev/nbd1 00:05:14.906 04:59:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:14.906 04:59:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:14.906 04:59:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:14.906 04:59:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:14.906 04:59:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:14.906 04:59:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:14.906 04:59:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:14.906 04:59:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:14.906 04:59:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:14.906 04:59:57 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:14.906 04:59:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.906 1+0 records in 00:05:14.906 1+0 records out 00:05:14.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251314 s, 16.3 MB/s 00:05:14.906 04:59:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.906 04:59:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:14.906 04:59:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.906 04:59:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:14.906 04:59:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:14.906 04:59:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.906 04:59:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.906 04:59:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.906 04:59:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.906 04:59:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:15.166 { 00:05:15.166 "nbd_device": "/dev/nbd0", 00:05:15.166 "bdev_name": "Malloc0" 00:05:15.166 }, 00:05:15.166 { 00:05:15.166 "nbd_device": "/dev/nbd1", 00:05:15.166 "bdev_name": "Malloc1" 00:05:15.166 } 00:05:15.166 ]' 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:15.166 { 00:05:15.166 "nbd_device": "/dev/nbd0", 00:05:15.166 "bdev_name": "Malloc0" 00:05:15.166 
}, 00:05:15.166 { 00:05:15.166 "nbd_device": "/dev/nbd1", 00:05:15.166 "bdev_name": "Malloc1" 00:05:15.166 } 00:05:15.166 ]' 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:15.166 /dev/nbd1' 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:15.166 /dev/nbd1' 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:15.166 256+0 records in 00:05:15.166 256+0 records out 00:05:15.166 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111119 s, 94.4 MB/s 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:15.166 256+0 records in 00:05:15.166 256+0 records out 00:05:15.166 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192779 s, 54.4 MB/s 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:15.166 256+0 records in 00:05:15.166 256+0 records out 00:05:15.166 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204041 s, 51.4 MB/s 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:15.166 04:59:57 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.166 04:59:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:15.425 04:59:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:15.425 04:59:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:15.425 04:59:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:15.425 04:59:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.425 04:59:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.425 04:59:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:15.425 04:59:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.425 04:59:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.425 04:59:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.425 04:59:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:15.684 04:59:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:15.684 04:59:57 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:15.684 04:59:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:15.684 04:59:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.684 04:59:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.684 04:59:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:15.684 04:59:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.684 04:59:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.684 04:59:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.684 04:59:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.684 04:59:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.684 04:59:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:15.684 04:59:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:15.684 04:59:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.944 04:59:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:15.944 04:59:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:15.944 04:59:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.944 04:59:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:15.944 04:59:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:15.944 04:59:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:15.944 04:59:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:15.944 04:59:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:15.944 04:59:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:15.944 04:59:58 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:15.944 04:59:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:16.203 [2024-12-09 04:59:58.571204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.203 [2024-12-09 04:59:58.606532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.203 [2024-12-09 04:59:58.606532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.203 [2024-12-09 04:59:58.646950] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:16.203 [2024-12-09 04:59:58.646993] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:19.494 05:00:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:19.494 05:00:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:19.494 spdk_app_start Round 1 00:05:19.494 05:00:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 283859 /var/tmp/spdk-nbd.sock 00:05:19.494 05:00:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 283859 ']' 00:05:19.494 05:00:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.494 05:00:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.494 05:00:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:19.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
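The `nbd_dd_data_verify` trace above follows a write-then-verify pattern: fill a 1 MiB temp file from `/dev/urandom`, `dd` it onto each NBD device with `oflag=direct`, then `cmp -b -n 1M` each device back against the temp file. A minimal sketch of that pattern follows; it uses plain temp files as hypothetical stand-ins for `/dev/nbd0`/`/dev/nbd1` so it runs without an SPDK target or NBD kernel module (on a real device the `oflag=direct` from the log would also apply):

```shell
#!/bin/bash
set -e

# Stand-ins for the real NBD devices (assumption: plain files, so no
# /dev/nbd* nodes or O_DIRECT alignment requirements are needed here).
tmp_file=$(mktemp)
dev0=$(mktemp)
dev1=$(mktemp)

# Write phase: 256 x 4096-byte blocks = 1 MiB of random data, copied to
# each "device", mirroring the dd invocations in the trace.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "$dev0" "$dev1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# Verify phase: byte-compare the first 1 MiB; cmp exits non-zero on the
# first mismatch, which aborts the script under `set -e`.
for dev in "$dev0" "$dev1"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done

echo "verify ok"
rm -f "$tmp_file" "$dev0" "$dev1"
```

The `rm` of the pattern file at the end corresponds to the `nbd_common.sh@85 rm .../nbdrandtest` step in the trace.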
00:05:19.494 05:00:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.494 05:00:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.494 05:00:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.494 05:00:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:19.494 05:00:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.494 Malloc0 00:05:19.494 05:00:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.753 Malloc1 00:05:19.753 05:00:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.753 05:00:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.753 05:00:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.753 05:00:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:19.753 05:00:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.753 05:00:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:19.753 05:00:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.753 05:00:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.753 05:00:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.753 05:00:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:19.753 05:00:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.753 05:00:02 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:19.753 05:00:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:19.753 05:00:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:19.753 05:00:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.753 05:00:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:20.011 /dev/nbd0 00:05:20.011 05:00:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:20.011 05:00:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:20.011 05:00:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:20.011 05:00:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.011 05:00:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.011 05:00:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.011 05:00:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:20.011 05:00:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.011 05:00:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:20.011 05:00:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.011 05:00:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.011 1+0 records in 00:05:20.011 1+0 records out 00:05:20.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217095 s, 18.9 MB/s 00:05:20.011 05:00:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.011 05:00:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:20.011 05:00:02 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.011 05:00:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:20.011 05:00:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:20.011 05:00:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.011 05:00:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.011 05:00:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:20.011 /dev/nbd1 00:05:20.269 05:00:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:20.269 05:00:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:20.269 05:00:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:20.269 05:00:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.269 05:00:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.269 05:00:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.269 05:00:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:20.269 05:00:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.269 05:00:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:20.269 05:00:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.269 05:00:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.269 1+0 records in 00:05:20.269 1+0 records out 00:05:20.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266427 s, 15.4 MB/s 00:05:20.269 05:00:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.269 05:00:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:20.269 05:00:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.269 05:00:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:20.269 05:00:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:20.269 05:00:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.269 05:00:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.269 05:00:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.269 05:00:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.269 05:00:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.269 05:00:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:20.269 { 00:05:20.269 "nbd_device": "/dev/nbd0", 00:05:20.269 "bdev_name": "Malloc0" 00:05:20.269 }, 00:05:20.269 { 00:05:20.269 "nbd_device": "/dev/nbd1", 00:05:20.269 "bdev_name": "Malloc1" 00:05:20.269 } 00:05:20.269 ]' 00:05:20.269 05:00:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:20.269 { 00:05:20.269 "nbd_device": "/dev/nbd0", 00:05:20.270 "bdev_name": "Malloc0" 00:05:20.270 }, 00:05:20.270 { 00:05:20.270 "nbd_device": "/dev/nbd1", 00:05:20.270 "bdev_name": "Malloc1" 00:05:20.270 } 00:05:20.270 ]' 00:05:20.270 05:00:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.528 05:00:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:20.528 /dev/nbd1' 00:05:20.528 05:00:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:20.528 /dev/nbd1' 00:05:20.528 
05:00:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:20.529 256+0 records in 00:05:20.529 256+0 records out 00:05:20.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109969 s, 95.4 MB/s 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:20.529 256+0 records in 00:05:20.529 256+0 records out 00:05:20.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134462 s, 78.0 MB/s 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:20.529 256+0 records in 00:05:20.529 256+0 records out 00:05:20.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207302 s, 50.6 MB/s 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.529 05:00:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:20.787 05:00:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:20.787 05:00:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:20.787 05:00:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:20.787 05:00:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.787 05:00:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.787 05:00:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:20.787 05:00:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.787 05:00:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.787 05:00:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.787 05:00:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:21.045 05:00:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:21.045 05:00:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:21.045 05:00:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:21.045 05:00:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.045 05:00:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.045 05:00:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:21.045 05:00:03 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:21.045 05:00:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.045 05:00:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.045 05:00:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.045 05:00:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.046 05:00:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:21.046 05:00:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:21.046 05:00:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.304 05:00:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:21.304 05:00:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:21.304 05:00:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.304 05:00:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:21.304 05:00:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:21.304 05:00:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:21.304 05:00:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:21.304 05:00:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:21.304 05:00:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:21.304 05:00:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:21.304 05:00:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:21.564 [2024-12-09 05:00:03.911701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.564 [2024-12-09 05:00:03.946179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.564 [2024-12-09 05:00:03.946179] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.564 [2024-12-09 05:00:03.987643] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:21.564 [2024-12-09 05:00:03.987686] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:24.854 05:00:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:24.854 05:00:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:24.854 spdk_app_start Round 2 00:05:24.854 05:00:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 283859 /var/tmp/spdk-nbd.sock 00:05:24.854 05:00:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 283859 ']' 00:05:24.854 05:00:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.854 05:00:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.854 05:00:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:24.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
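The `waitfornbd_exit` steps traced above (grep `/proc/partitions`, break, return 0) poll until the kernel removes the NBD device after `nbd_stop_disk`, giving up after 20 attempts. A minimal sketch of that polling loop, using a temp file as a hypothetical stand-in for `/proc/partitions` so it runs anywhere:

```shell
#!/bin/bash
set -e

# Stand-in for /proc/partitions (assumption: a temp file we rewrite by hand
# to simulate the kernel detaching nbd0 while nbd1 remains).
partitions=$(mktemp)
printf 'nbd0\nnbd1\n' > "$partitions"

# Simulate the asynchronous detach a short moment later.
( sleep 0.2; printf 'nbd1\n' > "$partitions" ) &

nbd_name=nbd0
i=1
while [ "$i" -le 20 ]; do
    # Same check as the trace: whole-word match, quiet; break once gone.
    if ! grep -q -w "$nbd_name" "$partitions"; then
        break
    fi
    sleep 0.1
    i=$((i + 1))
done
wait

if [ "$i" -le 20 ]; then
    echo "$nbd_name detached"
fi
rm -f "$partitions"
```

In the real script the loop body sleeps between retries the same way; exceeding the retry cap would leave the device listed and the caller's subsequent steps would fail.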
00:05:24.854 05:00:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.854 05:00:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.854 05:00:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.854 05:00:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:24.854 05:00:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.854 Malloc0 00:05:24.854 05:00:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.112 Malloc1 00:05:25.112 05:00:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.112 05:00:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.112 05:00:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.112 05:00:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:25.112 05:00:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.112 05:00:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:25.112 05:00:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.112 05:00:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.112 05:00:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.112 05:00:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:25.112 05:00:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.112 05:00:07 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:25.112 05:00:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:25.112 05:00:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:25.112 05:00:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.112 05:00:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:25.112 /dev/nbd0 00:05:25.372 05:00:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:25.372 05:00:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:25.372 05:00:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:25.372 05:00:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:25.372 05:00:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:25.372 05:00:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:25.372 05:00:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:25.372 05:00:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:25.372 05:00:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:25.372 05:00:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:25.372 05:00:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.372 1+0 records in 00:05:25.372 1+0 records out 00:05:25.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025445 s, 16.1 MB/s 00:05:25.372 05:00:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.372 05:00:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:25.372 05:00:07 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.372 05:00:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:25.372 05:00:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:25.372 05:00:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.372 05:00:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.372 05:00:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:25.372 /dev/nbd1 00:05:25.632 05:00:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:25.632 05:00:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:25.632 05:00:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:25.632 05:00:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:25.632 05:00:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:25.632 05:00:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:25.632 05:00:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:25.632 05:00:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:25.632 05:00:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:25.632 05:00:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:25.632 05:00:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.632 1+0 records in 00:05:25.632 1+0 records out 00:05:25.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247862 s, 16.5 MB/s 00:05:25.632 05:00:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.632 05:00:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:25.632 05:00:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.632 05:00:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:25.632 05:00:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:25.632 05:00:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.632 05:00:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.632 05:00:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.632 05:00:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.632 05:00:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.632 05:00:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:25.632 { 00:05:25.632 "nbd_device": "/dev/nbd0", 00:05:25.632 "bdev_name": "Malloc0" 00:05:25.632 }, 00:05:25.632 { 00:05:25.632 "nbd_device": "/dev/nbd1", 00:05:25.632 "bdev_name": "Malloc1" 00:05:25.632 } 00:05:25.632 ]' 00:05:25.632 05:00:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:25.632 { 00:05:25.632 "nbd_device": "/dev/nbd0", 00:05:25.632 "bdev_name": "Malloc0" 00:05:25.632 }, 00:05:25.632 { 00:05:25.632 "nbd_device": "/dev/nbd1", 00:05:25.632 "bdev_name": "Malloc1" 00:05:25.632 } 00:05:25.632 ]' 00:05:25.632 05:00:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.891 05:00:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:25.891 /dev/nbd1' 00:05:25.891 05:00:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:25.891 /dev/nbd1' 00:05:25.891 
05:00:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.891 05:00:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:25.891 05:00:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:25.891 05:00:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:25.891 05:00:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:25.891 05:00:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:25.891 05:00:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.891 05:00:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.891 05:00:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:25.891 05:00:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.891 05:00:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:25.891 05:00:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:25.891 256+0 records in 00:05:25.891 256+0 records out 00:05:25.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114971 s, 91.2 MB/s 00:05:25.891 05:00:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.891 05:00:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:25.891 256+0 records in 00:05:25.891 256+0 records out 00:05:25.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019018 s, 55.1 MB/s 00:05:25.891 05:00:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.891 05:00:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:25.891 256+0 records in 00:05:25.891 256+0 records out 00:05:25.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173275 s, 60.5 MB/s 00:05:25.891 05:00:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:25.891 05:00:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.891 05:00:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.891 05:00:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:25.892 05:00:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.892 05:00:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:25.892 05:00:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:25.892 05:00:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.892 05:00:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:25.892 05:00:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.892 05:00:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:25.892 05:00:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.892 05:00:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:25.892 05:00:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.892 05:00:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:25.892 05:00:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:25.892 05:00:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:25.892 05:00:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.892 05:00:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:26.150 05:00:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:26.150 05:00:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:26.150 05:00:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:26.150 05:00:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.150 05:00:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.150 05:00:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:26.150 05:00:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.150 05:00:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.150 05:00:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.150 05:00:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.409 05:00:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.409 05:00:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:26.409 05:00:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.409 05:00:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.409 05:00:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.409 05:00:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.409 05:00:08 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:26.409 05:00:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.409 05:00:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.409 05:00:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.410 05:00:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.410 05:00:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:26.410 05:00:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:26.410 05:00:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.669 05:00:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:26.669 05:00:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:26.669 05:00:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.669 05:00:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:26.669 05:00:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:26.669 05:00:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:26.669 05:00:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:26.669 05:00:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:26.669 05:00:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:26.669 05:00:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:26.669 05:00:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:26.928 [2024-12-09 05:00:09.262705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.928 [2024-12-09 05:00:09.298705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.928 [2024-12-09 05:00:09.298704] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.928 [2024-12-09 05:00:09.339643] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.928 [2024-12-09 05:00:09.339687] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:30.220 05:00:12 event.app_repeat -- event/event.sh@38 -- # waitforlisten 283859 /var/tmp/spdk-nbd.sock 00:05:30.220 05:00:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 283859 ']' 00:05:30.220 05:00:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.220 05:00:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.220 05:00:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:30.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
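The `nbd_dd_data_verify` cycle traced earlier in this log (fill a 1 MiB pattern file from /dev/urandom, `dd` it onto each exported /dev/nbdX, then `cmp` each device back against the pattern) can be sketched as follows. Ordinary temp files stand in for the nbd devices here, since no nbd exports exist outside the CI host, and `oflag=direct` is dropped because it only applies to block devices:

```shell
# Sketch of nbd_common.sh's write/verify pattern; temp files stand in
# for /dev/nbd0 and /dev/nbd1 (assumption: no nbd devices available).
tmp_file=$(mktemp)
dev0=$(mktemp)   # stands in for /dev/nbd0
dev1=$(mktemp)   # stands in for /dev/nbd1

# write phase: 256 x 4 KiB of random data, copied onto every "device"
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
for i in "$dev0" "$dev1"; do
    dd if="$tmp_file" of="$i" bs=4096 count=256 status=none
done

# verify phase: byte-compare the first 1M of each "device" to the pattern
verify=ok
for i in "$dev0" "$dev1"; do
    cmp -b -n 1M "$tmp_file" "$i" || verify=failed
done
echo "verify: $verify"
rm -f "$tmp_file" "$dev0" "$dev1"
```

In the real helper the same dd/cmp pair runs against nbd devices exported over the RPC socket, which is why the log shows one `256+0 records in/out` block per device and phase.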
00:05:30.220 05:00:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.220 05:00:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.220 05:00:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.220 05:00:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:30.220 05:00:12 event.app_repeat -- event/event.sh@39 -- # killprocess 283859 00:05:30.220 05:00:12 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 283859 ']' 00:05:30.220 05:00:12 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 283859 00:05:30.220 05:00:12 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:30.220 05:00:12 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.220 05:00:12 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 283859 00:05:30.220 05:00:12 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.220 05:00:12 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.220 05:00:12 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 283859' 00:05:30.220 killing process with pid 283859 00:05:30.220 05:00:12 event.app_repeat -- common/autotest_common.sh@973 -- # kill 283859 00:05:30.220 05:00:12 event.app_repeat -- common/autotest_common.sh@978 -- # wait 283859 00:05:30.220 spdk_app_start is called in Round 0. 00:05:30.220 Shutdown signal received, stop current app iteration 00:05:30.220 Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 reinitialization... 00:05:30.220 spdk_app_start is called in Round 1. 00:05:30.220 Shutdown signal received, stop current app iteration 00:05:30.220 Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 reinitialization... 00:05:30.220 spdk_app_start is called in Round 2. 
00:05:30.220 Shutdown signal received, stop current app iteration 00:05:30.220 Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 reinitialization... 00:05:30.220 spdk_app_start is called in Round 3. 00:05:30.220 Shutdown signal received, stop current app iteration 00:05:30.220 05:00:12 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:30.220 05:00:12 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:30.220 00:05:30.220 real 0m16.570s 00:05:30.220 user 0m35.871s 00:05:30.220 sys 0m3.076s 00:05:30.220 05:00:12 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.220 05:00:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.220 ************************************ 00:05:30.220 END TEST app_repeat 00:05:30.220 ************************************ 00:05:30.220 05:00:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:30.220 05:00:12 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:30.220 05:00:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.220 05:00:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.220 05:00:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.220 ************************************ 00:05:30.220 START TEST cpu_locks 00:05:30.220 ************************************ 00:05:30.220 05:00:12 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:30.479 * Looking for test storage... 
00:05:30.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:30.479 05:00:12 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.479 05:00:12 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.479 05:00:12 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.479 05:00:12 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.480 05:00:12 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:30.480 05:00:12 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.480 05:00:12 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.480 --rc genhtml_branch_coverage=1 00:05:30.480 --rc genhtml_function_coverage=1 00:05:30.480 --rc genhtml_legend=1 00:05:30.480 --rc geninfo_all_blocks=1 00:05:30.480 --rc geninfo_unexecuted_blocks=1 00:05:30.480 00:05:30.480 ' 00:05:30.480 05:00:12 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.480 --rc genhtml_branch_coverage=1 00:05:30.480 --rc genhtml_function_coverage=1 00:05:30.480 --rc genhtml_legend=1 00:05:30.480 --rc geninfo_all_blocks=1 00:05:30.480 --rc geninfo_unexecuted_blocks=1 
00:05:30.480 00:05:30.480 ' 00:05:30.480 05:00:12 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.480 --rc genhtml_branch_coverage=1 00:05:30.480 --rc genhtml_function_coverage=1 00:05:30.480 --rc genhtml_legend=1 00:05:30.480 --rc geninfo_all_blocks=1 00:05:30.480 --rc geninfo_unexecuted_blocks=1 00:05:30.480 00:05:30.480 ' 00:05:30.480 05:00:12 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.480 --rc genhtml_branch_coverage=1 00:05:30.480 --rc genhtml_function_coverage=1 00:05:30.480 --rc genhtml_legend=1 00:05:30.480 --rc geninfo_all_blocks=1 00:05:30.480 --rc geninfo_unexecuted_blocks=1 00:05:30.480 00:05:30.480 ' 00:05:30.480 05:00:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:30.480 05:00:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:30.480 05:00:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:30.480 05:00:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:30.480 05:00:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.480 05:00:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.480 05:00:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.480 ************************************ 00:05:30.480 START TEST default_locks 00:05:30.480 ************************************ 00:05:30.480 05:00:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:30.480 05:00:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=287581 00:05:30.480 05:00:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:05:30.480 05:00:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 287581 00:05:30.480 05:00:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 287581 ']' 00:05:30.480 05:00:12 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.480 05:00:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.480 05:00:12 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.480 05:00:12 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.480 05:00:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.480 [2024-12-09 05:00:12.889134] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:05:30.480 [2024-12-09 05:00:12.889180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid287581 ] 00:05:30.739 [2024-12-09 05:00:12.979775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.739 [2024-12-09 05:00:13.017402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.308 05:00:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.308 05:00:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:31.308 05:00:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 287581 00:05:31.308 05:00:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.308 05:00:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 287581 00:05:31.876 lslocks: write error 00:05:31.876 05:00:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 287581 00:05:32.136 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 287581 ']' 00:05:32.136 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 287581 00:05:32.136 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:32.136 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.136 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 287581 00:05:32.136 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.136 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.136 05:00:14 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 287581' 00:05:32.136 killing process with pid 287581 00:05:32.136 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 287581 00:05:32.136 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 287581 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 287581 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 287581 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 287581 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 287581 ']' 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
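The `killprocess` helper that recurs throughout these tests follows a small, reusable pattern: confirm the pid is still alive with `kill -0`, send SIGTERM, then reap it with `wait`. A minimal sketch of that pattern (the real helper also inspects `ps --no-headers -o comm=` to refuse to kill a `sudo` process, which is elided here):

```shell
# Minimal version of the killprocess pattern used throughout this log.
killprocess() {
    pid=$1
    kill -0 "$pid" 2>/dev/null || return 1  # not running -> nothing to do
    kill "$pid"                             # default signal is SIGTERM
    wait "$pid" 2>/dev/null || true         # reap; status reflects signal
}

sleep 30 &                                  # stand-in for a spdk_tgt child
pid=$!
killprocess "$pid"
```

`wait` only works on the shell's own children, which is why the tests launch `spdk_tgt` from the same script that later tears it down.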
00:05:32.396 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (287581) - No such process 00:05:32.396 ERROR: process (pid: 287581) is no longer running 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:32.396 00:05:32.396 real 0m1.924s 00:05:32.396 user 0m2.041s 00:05:32.396 sys 0m0.689s 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.396 05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.396 ************************************ 00:05:32.396 END TEST default_locks 00:05:32.396 ************************************ 00:05:32.396 05:00:14 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:32.396 05:00:14 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.396 05:00:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.396 05:00:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.396 ************************************ 00:05:32.396 START TEST default_locks_via_rpc 00:05:32.396 ************************************ 00:05:32.396 05:00:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:32.396 05:00:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=287890 00:05:32.396 05:00:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 287890 00:05:32.396 05:00:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.396 05:00:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 287890 ']' 00:05:32.396 05:00:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.396 05:00:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.396 05:00:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.396 05:00:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.396 05:00:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.655 [2024-12-09 05:00:14.899648] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:05:32.655 [2024-12-09 05:00:14.899697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid287890 ] 00:05:32.655 [2024-12-09 05:00:14.993520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.655 [2024-12-09 05:00:15.035694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.610 05:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.610 05:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:33.610 05:00:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:33.610 05:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.610 05:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.610 05:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.610 05:00:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:33.610 05:00:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:33.610 05:00:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:33.610 05:00:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:33.610 05:00:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:33.610 05:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.610 05:00:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.610 05:00:15 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.610 05:00:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 287890 00:05:33.610 05:00:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 287890 00:05:33.610 05:00:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.868 05:00:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 287890 00:05:33.868 05:00:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 287890 ']' 00:05:33.868 05:00:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 287890 00:05:33.868 05:00:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:33.868 05:00:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.868 05:00:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 287890 00:05:33.868 05:00:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.868 05:00:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.868 05:00:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 287890' 00:05:33.868 killing process with pid 287890 00:05:33.868 05:00:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 287890 00:05:33.868 05:00:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 287890 00:05:34.435 00:05:34.435 real 0m1.768s 00:05:34.435 user 0m1.863s 00:05:34.435 sys 0m0.618s 00:05:34.435 05:00:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.435 05:00:16 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.435 ************************************ 00:05:34.435 END TEST default_locks_via_rpc 00:05:34.435 ************************************ 00:05:34.436 05:00:16 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:34.436 05:00:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.436 05:00:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.436 05:00:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.436 ************************************ 00:05:34.436 START TEST non_locking_app_on_locked_coremask 00:05:34.436 ************************************ 00:05:34.436 05:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:34.436 05:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.436 05:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=288195 00:05:34.436 05:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 288195 /var/tmp/spdk.sock 00:05:34.436 05:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 288195 ']' 00:05:34.436 05:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.436 05:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.436 05:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:34.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.436 05:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.436 05:00:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.436 [2024-12-09 05:00:16.734366] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:05:34.436 [2024-12-09 05:00:16.734411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid288195 ] 00:05:34.436 [2024-12-09 05:00:16.826571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.436 [2024-12-09 05:00:16.866527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.373 05:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.373 05:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:35.373 05:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=288452 00:05:35.373 05:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 288452 /var/tmp/spdk2.sock 00:05:35.373 05:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:35.373 05:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 288452 ']' 00:05:35.373 05:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:05:35.373 05:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.373 05:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.373 05:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.373 05:00:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.373 [2024-12-09 05:00:17.619903] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:05:35.373 [2024-12-09 05:00:17.619954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid288452 ] 00:05:35.373 [2024-12-09 05:00:17.729513] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
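The "CPU core locks deactivated" notice above is the `--disable-cpumask-locks` path: the second `spdk_tgt` instance skips taking the per-core lock files that the first instance holds and that the `locks_exist` / `lslocks -p PID | grep spdk_cpu_lock` checks look for. The underlying mechanism is an ordinary advisory file lock; a minimal demonstration with util-linux `flock(1)` (assumed available), using a throwaway path rather than SPDK's real lock-file naming:

```shell
# Advisory file locking, the mechanism behind the per-core lock files
# these tests probe. The lock path is a stand-in, not SPDK's actual one.
lockfile=$(mktemp)
exec 9>"$lockfile"
flock -n 9                                  # take the lock on fd 9
if flock -n "$lockfile" -c true; then       # a second taker must fail
    status="not held"
else
    status="held"
fi
exec 9>&-                                   # closing the fd releases it
rm -f "$lockfile"
```

This is why a second target started without `--disable-cpumask-locks` on the same core would fail to come up, while the flag lets both instances share core 0x1 in this test.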
00:05:35.373 [2024-12-09 05:00:17.729538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.373 [2024-12-09 05:00:17.807902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.310 05:00:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.310 05:00:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:36.310 05:00:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 288195 00:05:36.310 05:00:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 288195 00:05:36.310 05:00:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.877 lslocks: write error 00:05:36.877 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 288195 00:05:36.878 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 288195 ']' 00:05:36.878 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 288195 00:05:36.878 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:36.878 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.878 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 288195 00:05:36.878 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.878 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.878 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 288195' 00:05:36.878 killing process with pid 288195 00:05:36.878 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 288195 00:05:36.878 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 288195 00:05:37.449 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 288452 00:05:37.449 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 288452 ']' 00:05:37.449 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 288452 00:05:37.449 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:37.449 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.449 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 288452 00:05:37.449 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.449 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.449 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 288452' 00:05:37.449 killing process with pid 288452 00:05:37.449 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 288452 00:05:37.449 05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 288452 00:05:38.019 00:05:38.019 real 0m3.519s 00:05:38.019 user 0m3.796s 00:05:38.019 sys 0m1.079s 00:05:38.019 05:00:20 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.019 05:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.019 ************************************ 00:05:38.019 END TEST non_locking_app_on_locked_coremask 00:05:38.019 ************************************ 00:05:38.019 05:00:20 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:38.019 05:00:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.019 05:00:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.019 05:00:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.019 ************************************ 00:05:38.019 START TEST locking_app_on_unlocked_coremask 00:05:38.019 ************************************ 00:05:38.019 05:00:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:38.019 05:00:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=289014 00:05:38.019 05:00:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 289014 /var/tmp/spdk.sock 00:05:38.019 05:00:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:38.019 05:00:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 289014 ']' 00:05:38.019 05:00:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.019 05:00:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.019 05:00:20 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.019 05:00:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.019 05:00:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.019 [2024-12-09 05:00:20.343182] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:05:38.019 [2024-12-09 05:00:20.343246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid289014 ] 00:05:38.019 [2024-12-09 05:00:20.435718] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:38.019 [2024-12-09 05:00:20.435746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.019 [2024-12-09 05:00:20.477407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.959 05:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.959 05:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:38.959 05:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=289035 00:05:38.959 05:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 289035 /var/tmp/spdk2.sock 00:05:38.959 05:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:38.959 05:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 289035 ']' 00:05:38.959 05:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.959 05:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.959 05:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.959 05:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.959 05:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.959 [2024-12-09 05:00:21.226797] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:05:38.959 [2024-12-09 05:00:21.226847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid289035 ] 00:05:38.959 [2024-12-09 05:00:21.339399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.959 [2024-12-09 05:00:21.418857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.899 05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.899 05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:39.899 05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 289035 00:05:39.899 05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 289035 00:05:39.899 05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.469 lslocks: write error 00:05:40.469 05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 289014 00:05:40.469 05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 289014 ']' 00:05:40.469 05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 289014 00:05:40.469 05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:40.469 05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.469 05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 289014 00:05:40.729 05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.729 05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.729 05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 289014' 00:05:40.729 killing process with pid 289014 00:05:40.729 05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 289014 00:05:40.729 05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 289014 00:05:41.299 05:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 289035 00:05:41.299 05:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 289035 ']' 00:05:41.299 05:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 289035 00:05:41.299 05:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:41.299 05:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.299 05:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 289035 00:05:41.299 05:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.299 05:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.299 05:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 289035' 00:05:41.299 killing process with pid 289035 00:05:41.299 05:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 289035 00:05:41.299 05:00:23 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 289035 00:05:41.561 00:05:41.561 real 0m3.729s 00:05:41.561 user 0m4.047s 00:05:41.561 sys 0m1.170s 00:05:41.561 05:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.561 05:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.561 ************************************ 00:05:41.561 END TEST locking_app_on_unlocked_coremask 00:05:41.561 ************************************ 00:05:41.820 05:00:24 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:41.820 05:00:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.820 05:00:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.820 05:00:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.820 ************************************ 00:05:41.820 START TEST locking_app_on_locked_coremask 00:05:41.820 ************************************ 00:05:41.820 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:41.820 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=289600 00:05:41.820 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 289600 /var/tmp/spdk.sock 00:05:41.820 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.820 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 289600 ']' 00:05:41.820 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:05:41.820 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.820 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.820 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.820 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.820 [2024-12-09 05:00:24.157130] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:05:41.820 [2024-12-09 05:00:24.157175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid289600 ] 00:05:41.821 [2024-12-09 05:00:24.247987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.821 [2024-12-09 05:00:24.290122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.758 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.758 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:42.758 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=289859 00:05:42.758 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 289859 /var/tmp/spdk2.sock 00:05:42.758 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:05:42.758 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:42.758 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 289859 /var/tmp/spdk2.sock 00:05:42.758 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:42.758 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.758 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:42.758 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.758 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 289859 /var/tmp/spdk2.sock 00:05:42.758 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 289859 ']' 00:05:42.758 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.758 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.758 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:42.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:42.758 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.758 05:00:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.758 [2024-12-09 05:00:25.029790] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:05:42.758 [2024-12-09 05:00:25.029845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid289859 ] 00:05:42.758 [2024-12-09 05:00:25.146530] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 289600 has claimed it. 00:05:42.758 [2024-12-09 05:00:25.146573] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:43.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (289859) - No such process 00:05:43.325 ERROR: process (pid: 289859) is no longer running 00:05:43.325 05:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.325 05:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:43.325 05:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:43.325 05:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:43.325 05:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:43.325 05:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:43.325 05:00:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 289600 00:05:43.325 05:00:25 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 289600 00:05:43.325 05:00:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.893 lslocks: write error 00:05:43.893 05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 289600 00:05:43.893 05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 289600 ']' 00:05:43.893 05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 289600 00:05:43.893 05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:43.893 05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.893 05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 289600 00:05:43.893 05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.893 05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.893 05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 289600' 00:05:43.893 killing process with pid 289600 00:05:43.893 05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 289600 00:05:43.894 05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 289600 00:05:44.154 00:05:44.154 real 0m2.461s 00:05:44.154 user 0m2.705s 00:05:44.154 sys 0m0.744s 00:05:44.154 05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.154 05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:05:44.154 ************************************ 00:05:44.154 END TEST locking_app_on_locked_coremask 00:05:44.154 ************************************ 00:05:44.154 05:00:26 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:44.154 05:00:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.154 05:00:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.154 05:00:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.413 ************************************ 00:05:44.413 START TEST locking_overlapped_coremask 00:05:44.413 ************************************ 00:05:44.413 05:00:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:44.413 05:00:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=290163 00:05:44.413 05:00:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 290163 /var/tmp/spdk.sock 00:05:44.413 05:00:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:44.413 05:00:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 290163 ']' 00:05:44.413 05:00:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.413 05:00:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.413 05:00:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:44.414 05:00:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.414 05:00:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.414 [2024-12-09 05:00:26.697249] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:05:44.414 [2024-12-09 05:00:26.697297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290163 ] 00:05:44.414 [2024-12-09 05:00:26.773974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.414 [2024-12-09 05:00:26.816889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.414 [2024-12-09 05:00:26.816998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.414 [2024-12-09 05:00:26.816999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.673 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.673 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:44.673 05:00:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=290168 00:05:44.673 05:00:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 290168 /var/tmp/spdk2.sock 00:05:44.673 05:00:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:44.673 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:44.673 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 290168 /var/tmp/spdk2.sock 00:05:44.673 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:44.673 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.673 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:44.673 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.673 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 290168 /var/tmp/spdk2.sock 00:05:44.673 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 290168 ']' 00:05:44.673 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.673 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.673 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.673 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.673 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.673 [2024-12-09 05:00:27.082381] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:05:44.673 [2024-12-09 05:00:27.082430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290168 ] 00:05:44.933 [2024-12-09 05:00:27.197941] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 290163 has claimed it. 00:05:44.933 [2024-12-09 05:00:27.197980] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:45.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (290168) - No such process 00:05:45.504 ERROR: process (pid: 290168) is no longer running 00:05:45.504 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.504 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:45.504 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:45.504 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:45.504 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:45.504 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:45.504 05:00:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:45.504 05:00:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:45.504 05:00:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:45.504 05:00:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:45.504 05:00:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 290163 00:05:45.504 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 290163 ']' 00:05:45.504 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 290163 00:05:45.504 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:45.504 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.504 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 290163 00:05:45.504 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.504 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.504 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 290163' 00:05:45.504 killing process with pid 290163 00:05:45.504 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 290163 00:05:45.504 05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 290163 00:05:45.764 00:05:45.764 real 0m1.484s 00:05:45.764 user 0m3.980s 00:05:45.764 sys 0m0.454s 00:05:45.764 05:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.764 05:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.764 ************************************ 
00:05:45.764 END TEST locking_overlapped_coremask 00:05:45.764 ************************************ 00:05:45.764 05:00:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:45.764 05:00:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.764 05:00:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.764 05:00:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.764 ************************************ 00:05:45.764 START TEST locking_overlapped_coremask_via_rpc 00:05:45.764 ************************************ 00:05:45.764 05:00:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:45.764 05:00:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=290458 00:05:45.764 05:00:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 290458 /var/tmp/spdk.sock 00:05:45.764 05:00:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:45.764 05:00:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 290458 ']' 00:05:45.764 05:00:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.764 05:00:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.764 05:00:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:45.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.764 05:00:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.764 05:00:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.023 [2024-12-09 05:00:28.269301] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:05:46.023 [2024-12-09 05:00:28.269349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290458 ] 00:05:46.023 [2024-12-09 05:00:28.362340] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:46.023 [2024-12-09 05:00:28.362366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.023 [2024-12-09 05:00:28.405114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.023 [2024-12-09 05:00:28.405238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.023 [2024-12-09 05:00:28.405239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.962 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.962 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:46.962 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=290502 00:05:46.962 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 290502 /var/tmp/spdk2.sock 00:05:46.962 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:05:46.962 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 290502 ']' 00:05:46.962 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.962 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.962 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.962 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.962 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.962 [2024-12-09 05:00:29.149454] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:05:46.962 [2024-12-09 05:00:29.149505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290502 ] 00:05:46.962 [2024-12-09 05:00:29.266623] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
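The two targets above are deliberately launched with overlapping coremasks: the first spdk_tgt uses -m 0x7 (cores 0-2) and the second -m 0x1c (cores 2-4), so core 2 is contested. A rough sketch of decoding those masks (the helper name is illustrative, not part of the SPDK scripts):

```python
# Decode an SPDK/DPDK-style hex coremask into the set of core IDs it selects.
def mask_to_cores(mask: int) -> set[int]:
    cores = set()
    bit = 0
    while mask:
        if mask & 1:
            cores.add(bit)
        mask >>= 1
        bit += 1
    return cores

first = mask_to_cores(0x7)    # first spdk_tgt:  {0, 1, 2}
second = mask_to_cores(0x1c)  # second spdk_tgt: {2, 3, 4}
overlap = first & second      # the contested core the test relies on: {2}
print(sorted(first), sorted(second), sorted(overlap))
```

The overlap on core 2 is what makes the later framework_enable_cpumask_locks call on the second target fail, which is the behavior this test asserts.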
00:05:46.962 [2024-12-09 05:00:29.266658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.962 [2024-12-09 05:00:29.352681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.962 [2024-12-09 05:00:29.352799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.962 [2024-12-09 05:00:29.352800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:47.532 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.532 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:47.532 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:47.532 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.532 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.532 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.532 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.532 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:47.532 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.532 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:47.791 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.791 05:00:30 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:47.791 05:00:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.791 [2024-12-09 05:00:30.011284] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 290458 has claimed it. 00:05:47.791 request: 00:05:47.791 { 00:05:47.791 "method": "framework_enable_cpumask_locks", 00:05:47.791 "req_id": 1 00:05:47.791 } 00:05:47.791 Got JSON-RPC error response 00:05:47.791 response: 00:05:47.791 { 00:05:47.791 "code": -32603, 00:05:47.791 "message": "Failed to claim CPU core: 2" 00:05:47.791 } 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 290458 /var/tmp/spdk.sock 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 290458 ']' 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 290502 /var/tmp/spdk2.sock 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 290502 ']' 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
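The request and error response logged above, reassembled as JSON, look like the following (a sketch of the payloads as printed by the log; -32603 is the standard JSON-RPC "internal error" code):

```python
import json

# The second target asks to enable cpumask locks, but core 2 is already
# claimed by the first target (pid 290458), so the RPC is rejected.
request = {"method": "framework_enable_cpumask_locks", "req_id": 1}
response = {"code": -32603, "message": "Failed to claim CPU core: 2"}

# A caller would branch on the error code and surface the message;
# round-trip through JSON here just to mirror the on-the-wire form.
payload = json.loads(json.dumps(response))
print(payload["code"], payload["message"])
```

The test treats this rejection as the expected outcome (NOT rpc_cmd ... returns es=1), since the whole point is that an already-claimed core cannot be locked twice.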
00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.791 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.051 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.051 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:48.051 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:48.051 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:48.051 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:48.051 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:48.051 00:05:48.051 real 0m2.206s 00:05:48.051 user 0m0.933s 00:05:48.051 sys 0m0.201s 00:05:48.051 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.051 05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.051 ************************************ 00:05:48.051 END TEST locking_overlapped_coremask_via_rpc 00:05:48.051 ************************************ 00:05:48.051 05:00:30 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:48.051 05:00:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 290458 ]] 00:05:48.051 05:00:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 290458 00:05:48.051 05:00:30 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 290458 ']' 00:05:48.051 05:00:30 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 290458 00:05:48.051 05:00:30 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:48.051 05:00:30 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.051 05:00:30 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 290458 00:05:48.311 05:00:30 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.311 05:00:30 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.311 05:00:30 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 290458' 00:05:48.311 killing process with pid 290458 00:05:48.311 05:00:30 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 290458 00:05:48.311 05:00:30 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 290458 00:05:48.570 05:00:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 290502 ]] 00:05:48.570 05:00:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 290502 00:05:48.570 05:00:30 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 290502 ']' 00:05:48.570 05:00:30 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 290502 00:05:48.570 05:00:30 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:48.570 05:00:30 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.570 05:00:30 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 290502 00:05:48.570 05:00:30 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:48.570 05:00:30 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:48.570 05:00:30 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 290502' 00:05:48.570 
killing process with pid 290502 00:05:48.570 05:00:30 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 290502 00:05:48.570 05:00:30 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 290502 00:05:48.830 05:00:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:48.830 05:00:31 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:48.830 05:00:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 290458 ]] 00:05:48.830 05:00:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 290458 00:05:48.830 05:00:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 290458 ']' 00:05:48.830 05:00:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 290458 00:05:48.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (290458) - No such process 00:05:48.831 05:00:31 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 290458 is not found' 00:05:48.831 Process with pid 290458 is not found 00:05:48.831 05:00:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 290502 ]] 00:05:48.831 05:00:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 290502 00:05:48.831 05:00:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 290502 ']' 00:05:48.831 05:00:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 290502 00:05:48.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (290502) - No such process 00:05:48.831 05:00:31 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 290502 is not found' 00:05:48.831 Process with pid 290502 is not found 00:05:48.831 05:00:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:49.090 00:05:49.090 real 0m18.699s 00:05:49.090 user 0m30.816s 00:05:49.090 sys 0m6.100s 00:05:49.090 05:00:31 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.090 05:00:31 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:05:49.090 ************************************ 00:05:49.090 END TEST cpu_locks 00:05:49.090 ************************************ 00:05:49.090 00:05:49.090 real 0m44.908s 00:05:49.090 user 1m24.081s 00:05:49.090 sys 0m10.407s 00:05:49.090 05:00:31 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.090 05:00:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.090 ************************************ 00:05:49.090 END TEST event 00:05:49.090 ************************************ 00:05:49.090 05:00:31 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:49.090 05:00:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.090 05:00:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.090 05:00:31 -- common/autotest_common.sh@10 -- # set +x 00:05:49.090 ************************************ 00:05:49.090 START TEST thread 00:05:49.090 ************************************ 00:05:49.090 05:00:31 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:49.090 * Looking for test storage... 
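The check_remaining_locks step traced earlier compares the glob /var/tmp/spdk_cpu_lock_* against the brace expansion /var/tmp/spdk_cpu_lock_{000..002}. A rough Python rendering of that comparison (using a stand-in listing instead of a real glob, since the lock files only exist while the target runs):

```python
# Expected lock files for a 3-core target: spdk_cpu_lock_000 .. _002.
expected = [f"/var/tmp/spdk_cpu_lock_{i:03d}" for i in range(3)]

# Stand-in for glob.glob("/var/tmp/spdk_cpu_lock_*") output on a live target.
found = ["/var/tmp/spdk_cpu_lock_000",
         "/var/tmp/spdk_cpu_lock_001",
         "/var/tmp/spdk_cpu_lock_002"]

# The shell test passes only when the two listings match exactly.
assert sorted(found) == expected
print("locks match:", expected)
```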
00:05:49.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:49.090 05:00:31 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.090 05:00:31 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.090 05:00:31 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.349 05:00:31 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.349 05:00:31 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.349 05:00:31 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.349 05:00:31 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.349 05:00:31 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.349 05:00:31 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.349 05:00:31 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.349 05:00:31 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.349 05:00:31 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.349 05:00:31 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.349 05:00:31 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.349 05:00:31 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.349 05:00:31 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:49.349 05:00:31 thread -- scripts/common.sh@345 -- # : 1 00:05:49.349 05:00:31 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.349 05:00:31 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.349 05:00:31 thread -- scripts/common.sh@365 -- # decimal 1 00:05:49.349 05:00:31 thread -- scripts/common.sh@353 -- # local d=1 00:05:49.349 05:00:31 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.349 05:00:31 thread -- scripts/common.sh@355 -- # echo 1 00:05:49.349 05:00:31 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.349 05:00:31 thread -- scripts/common.sh@366 -- # decimal 2 00:05:49.349 05:00:31 thread -- scripts/common.sh@353 -- # local d=2 00:05:49.349 05:00:31 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.349 05:00:31 thread -- scripts/common.sh@355 -- # echo 2 00:05:49.349 05:00:31 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.349 05:00:31 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.349 05:00:31 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.349 05:00:31 thread -- scripts/common.sh@368 -- # return 0 00:05:49.349 05:00:31 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.349 05:00:31 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.349 --rc genhtml_branch_coverage=1 00:05:49.349 --rc genhtml_function_coverage=1 00:05:49.349 --rc genhtml_legend=1 00:05:49.349 --rc geninfo_all_blocks=1 00:05:49.349 --rc geninfo_unexecuted_blocks=1 00:05:49.349 00:05:49.349 ' 00:05:49.349 05:00:31 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.349 --rc genhtml_branch_coverage=1 00:05:49.349 --rc genhtml_function_coverage=1 00:05:49.349 --rc genhtml_legend=1 00:05:49.349 --rc geninfo_all_blocks=1 00:05:49.349 --rc geninfo_unexecuted_blocks=1 00:05:49.349 00:05:49.349 ' 00:05:49.349 05:00:31 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.349 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.349 --rc genhtml_branch_coverage=1 00:05:49.349 --rc genhtml_function_coverage=1 00:05:49.349 --rc genhtml_legend=1 00:05:49.349 --rc geninfo_all_blocks=1 00:05:49.349 --rc geninfo_unexecuted_blocks=1 00:05:49.349 00:05:49.349 ' 00:05:49.349 05:00:31 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.349 --rc genhtml_branch_coverage=1 00:05:49.349 --rc genhtml_function_coverage=1 00:05:49.349 --rc genhtml_legend=1 00:05:49.349 --rc geninfo_all_blocks=1 00:05:49.349 --rc geninfo_unexecuted_blocks=1 00:05:49.349 00:05:49.349 ' 00:05:49.349 05:00:31 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:49.349 05:00:31 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:49.349 05:00:31 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.349 05:00:31 thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.349 ************************************ 00:05:49.349 START TEST thread_poller_perf 00:05:49.349 ************************************ 00:05:49.349 05:00:31 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:49.349 [2024-12-09 05:00:31.684165] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:05:49.349 [2024-12-09 05:00:31.684239] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291114 ] 00:05:49.349 [2024-12-09 05:00:31.781361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.608 [2024-12-09 05:00:31.820040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.608 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:50.546 [2024-12-09T04:00:33.016Z] ====================================== 00:05:50.546 [2024-12-09T04:00:33.016Z] busy:2507249208 (cyc) 00:05:50.546 [2024-12-09T04:00:33.016Z] total_run_count: 424000 00:05:50.546 [2024-12-09T04:00:33.016Z] tsc_hz: 2500000000 (cyc) 00:05:50.546 [2024-12-09T04:00:33.016Z] ====================================== 00:05:50.546 [2024-12-09T04:00:33.016Z] poller_cost: 5913 (cyc), 2365 (nsec) 00:05:50.546 00:05:50.546 real 0m1.239s 00:05:50.546 user 0m1.144s 00:05:50.546 sys 0m0.090s 00:05:50.546 05:00:32 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.546 05:00:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.546 ************************************ 00:05:50.546 END TEST thread_poller_perf 00:05:50.546 ************************************ 00:05:50.546 05:00:32 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:50.546 05:00:32 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:50.546 05:00:32 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.546 05:00:32 thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.546 ************************************ 00:05:50.546 START TEST thread_poller_perf 00:05:50.546 
************************************ 00:05:50.546 05:00:32 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:50.546 [2024-12-09 05:00:33.003222] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:05:50.546 [2024-12-09 05:00:33.003288] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291401 ] 00:05:50.805 [2024-12-09 05:00:33.095977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.805 [2024-12-09 05:00:33.133355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.805 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:51.742 [2024-12-09T04:00:34.212Z] ====================================== 00:05:51.742 [2024-12-09T04:00:34.212Z] busy:2501535782 (cyc) 00:05:51.742 [2024-12-09T04:00:34.212Z] total_run_count: 5654000 00:05:51.742 [2024-12-09T04:00:34.212Z] tsc_hz: 2500000000 (cyc) 00:05:51.742 [2024-12-09T04:00:34.212Z] ====================================== 00:05:51.742 [2024-12-09T04:00:34.212Z] poller_cost: 442 (cyc), 176 (nsec) 00:05:51.742 00:05:51.742 real 0m1.225s 00:05:51.742 user 0m1.130s 00:05:51.742 sys 0m0.091s 00:05:51.742 05:00:34 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.742 05:00:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:51.742 ************************************ 00:05:51.742 END TEST thread_poller_perf 00:05:51.742 ************************************ 00:05:52.001 05:00:34 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:52.002 00:05:52.002 real 0m2.824s 00:05:52.002 user 0m2.437s 00:05:52.002 sys 0m0.408s 00:05:52.002 05:00:34 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.002 05:00:34 thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.002 ************************************ 00:05:52.002 END TEST thread 00:05:52.002 ************************************ 00:05:52.002 05:00:34 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:52.002 05:00:34 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:52.002 05:00:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.002 05:00:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.002 05:00:34 -- common/autotest_common.sh@10 -- # set +x 00:05:52.002 ************************************ 00:05:52.002 START TEST app_cmdline 00:05:52.002 ************************************ 00:05:52.002 05:00:34 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:52.002 * Looking for test storage... 00:05:52.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:52.002 05:00:34 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:52.002 05:00:34 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:52.002 05:00:34 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:52.260 05:00:34 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:52.260 05:00:34 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.260 05:00:34 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.260 05:00:34 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.260 05:00:34 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.260 05:00:34 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.260 05:00:34 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.260 05:00:34 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:52.260 05:00:34 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.260 05:00:34 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.260 05:00:34 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.260 05:00:34 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.260 05:00:34 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:52.260 05:00:34 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:52.260 05:00:34 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.260 05:00:34 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:52.261 05:00:34 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:52.261 05:00:34 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:52.261 05:00:34 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.261 05:00:34 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:52.261 05:00:34 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.261 05:00:34 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:52.261 05:00:34 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:52.261 05:00:34 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.261 05:00:34 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:52.261 05:00:34 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.261 05:00:34 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.261 05:00:34 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.261 05:00:34 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:52.261 05:00:34 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.261 05:00:34 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:52.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.261 --rc genhtml_branch_coverage=1 
00:05:52.261 --rc genhtml_function_coverage=1 00:05:52.261 --rc genhtml_legend=1 00:05:52.261 --rc geninfo_all_blocks=1 00:05:52.261 --rc geninfo_unexecuted_blocks=1 00:05:52.261 00:05:52.261 ' 00:05:52.261 05:00:34 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:52.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.261 --rc genhtml_branch_coverage=1 00:05:52.261 --rc genhtml_function_coverage=1 00:05:52.261 --rc genhtml_legend=1 00:05:52.261 --rc geninfo_all_blocks=1 00:05:52.261 --rc geninfo_unexecuted_blocks=1 00:05:52.261 00:05:52.261 ' 00:05:52.261 05:00:34 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:52.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.261 --rc genhtml_branch_coverage=1 00:05:52.261 --rc genhtml_function_coverage=1 00:05:52.261 --rc genhtml_legend=1 00:05:52.261 --rc geninfo_all_blocks=1 00:05:52.261 --rc geninfo_unexecuted_blocks=1 00:05:52.261 00:05:52.261 ' 00:05:52.261 05:00:34 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:52.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.261 --rc genhtml_branch_coverage=1 00:05:52.261 --rc genhtml_function_coverage=1 00:05:52.261 --rc genhtml_legend=1 00:05:52.261 --rc geninfo_all_blocks=1 00:05:52.261 --rc geninfo_unexecuted_blocks=1 00:05:52.261 00:05:52.261 ' 00:05:52.261 05:00:34 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:52.261 05:00:34 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:52.261 05:00:34 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=291735 00:05:52.261 05:00:34 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 291735 00:05:52.261 05:00:34 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 291735 ']' 00:05:52.261 05:00:34 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:52.261 05:00:34 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.261 05:00:34 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.261 05:00:34 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.261 05:00:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:52.261 [2024-12-09 05:00:34.581377] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:05:52.261 [2024-12-09 05:00:34.581425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291735 ] 00:05:52.261 [2024-12-09 05:00:34.674994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.261 [2024-12-09 05:00:34.714936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.201 05:00:35 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.201 05:00:35 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:53.201 05:00:35 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:53.201 { 00:05:53.201 "version": "SPDK v25.01-pre git sha1 cabd61f7f", 00:05:53.201 "fields": { 00:05:53.201 "major": 25, 00:05:53.201 "minor": 1, 00:05:53.201 "patch": 0, 00:05:53.201 "suffix": "-pre", 00:05:53.201 "commit": "cabd61f7f" 00:05:53.201 } 00:05:53.201 } 00:05:53.201 05:00:35 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:53.201 05:00:35 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:53.201 05:00:35 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:05:53.201 05:00:35 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:53.201 05:00:35 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:53.201 05:00:35 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:53.201 05:00:35 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.201 05:00:35 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:53.201 05:00:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:53.201 05:00:35 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.201 05:00:35 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:53.201 05:00:35 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:53.201 05:00:35 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:53.201 05:00:35 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:53.201 05:00:35 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:53.201 05:00:35 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:53.201 05:00:35 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.201 05:00:35 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:53.201 05:00:35 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.201 05:00:35 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:53.201 05:00:35 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:05:53.201 05:00:35 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:53.201 05:00:35 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:53.201 05:00:35 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:53.461 request: 00:05:53.461 { 00:05:53.461 "method": "env_dpdk_get_mem_stats", 00:05:53.461 "req_id": 1 00:05:53.461 } 00:05:53.461 Got JSON-RPC error response 00:05:53.461 response: 00:05:53.461 { 00:05:53.461 "code": -32601, 00:05:53.461 "message": "Method not found" 00:05:53.461 } 00:05:53.461 05:00:35 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:53.461 05:00:35 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:53.461 05:00:35 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:53.461 05:00:35 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:53.461 05:00:35 app_cmdline -- app/cmdline.sh@1 -- # killprocess 291735 00:05:53.461 05:00:35 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 291735 ']' 00:05:53.461 05:00:35 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 291735 00:05:53.461 05:00:35 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:53.461 05:00:35 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.461 05:00:35 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 291735 00:05:53.461 05:00:35 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.461 05:00:35 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.461 05:00:35 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 291735' 00:05:53.461 killing process with pid 291735 00:05:53.461 05:00:35 
app_cmdline -- common/autotest_common.sh@973 -- # kill 291735 00:05:53.461 05:00:35 app_cmdline -- common/autotest_common.sh@978 -- # wait 291735 00:05:54.030 00:05:54.030 real 0m1.878s 00:05:54.030 user 0m2.165s 00:05:54.030 sys 0m0.548s 00:05:54.030 05:00:36 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.030 05:00:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:54.030 ************************************ 00:05:54.030 END TEST app_cmdline 00:05:54.030 ************************************ 00:05:54.030 05:00:36 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:54.030 05:00:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.030 05:00:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.030 05:00:36 -- common/autotest_common.sh@10 -- # set +x 00:05:54.030 ************************************ 00:05:54.030 START TEST version 00:05:54.030 ************************************ 00:05:54.030 05:00:36 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:54.030 * Looking for test storage... 
00:05:54.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:54.030 05:00:36 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:54.030 05:00:36 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:54.030 05:00:36 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:54.030 05:00:36 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:54.030 05:00:36 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.030 05:00:36 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.030 05:00:36 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.030 05:00:36 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.030 05:00:36 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.030 05:00:36 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.030 05:00:36 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.030 05:00:36 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.030 05:00:36 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.030 05:00:36 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.030 05:00:36 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.030 05:00:36 version -- scripts/common.sh@344 -- # case "$op" in 00:05:54.030 05:00:36 version -- scripts/common.sh@345 -- # : 1 00:05:54.030 05:00:36 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.031 05:00:36 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.031 05:00:36 version -- scripts/common.sh@365 -- # decimal 1 00:05:54.031 05:00:36 version -- scripts/common.sh@353 -- # local d=1 00:05:54.031 05:00:36 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.031 05:00:36 version -- scripts/common.sh@355 -- # echo 1 00:05:54.031 05:00:36 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.031 05:00:36 version -- scripts/common.sh@366 -- # decimal 2 00:05:54.031 05:00:36 version -- scripts/common.sh@353 -- # local d=2 00:05:54.031 05:00:36 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.031 05:00:36 version -- scripts/common.sh@355 -- # echo 2 00:05:54.031 05:00:36 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.031 05:00:36 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.031 05:00:36 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.031 05:00:36 version -- scripts/common.sh@368 -- # return 0 00:05:54.031 05:00:36 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.031 05:00:36 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:54.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.031 --rc genhtml_branch_coverage=1 00:05:54.031 --rc genhtml_function_coverage=1 00:05:54.031 --rc genhtml_legend=1 00:05:54.031 --rc geninfo_all_blocks=1 00:05:54.031 --rc geninfo_unexecuted_blocks=1 00:05:54.031 00:05:54.031 ' 00:05:54.031 05:00:36 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:54.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.031 --rc genhtml_branch_coverage=1 00:05:54.031 --rc genhtml_function_coverage=1 00:05:54.031 --rc genhtml_legend=1 00:05:54.031 --rc geninfo_all_blocks=1 00:05:54.031 --rc geninfo_unexecuted_blocks=1 00:05:54.031 00:05:54.031 ' 00:05:54.031 05:00:36 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:54.031 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.031 --rc genhtml_branch_coverage=1 00:05:54.031 --rc genhtml_function_coverage=1 00:05:54.031 --rc genhtml_legend=1 00:05:54.031 --rc geninfo_all_blocks=1 00:05:54.031 --rc geninfo_unexecuted_blocks=1 00:05:54.031 00:05:54.031 ' 00:05:54.031 05:00:36 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:54.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.031 --rc genhtml_branch_coverage=1 00:05:54.031 --rc genhtml_function_coverage=1 00:05:54.031 --rc genhtml_legend=1 00:05:54.031 --rc geninfo_all_blocks=1 00:05:54.031 --rc geninfo_unexecuted_blocks=1 00:05:54.031 00:05:54.031 ' 00:05:54.031 05:00:36 version -- app/version.sh@17 -- # get_header_version major 00:05:54.031 05:00:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:54.031 05:00:36 version -- app/version.sh@14 -- # cut -f2 00:05:54.031 05:00:36 version -- app/version.sh@14 -- # tr -d '"' 00:05:54.291 05:00:36 version -- app/version.sh@17 -- # major=25 00:05:54.291 05:00:36 version -- app/version.sh@18 -- # get_header_version minor 00:05:54.291 05:00:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:54.291 05:00:36 version -- app/version.sh@14 -- # cut -f2 00:05:54.291 05:00:36 version -- app/version.sh@14 -- # tr -d '"' 00:05:54.291 05:00:36 version -- app/version.sh@18 -- # minor=1 00:05:54.291 05:00:36 version -- app/version.sh@19 -- # get_header_version patch 00:05:54.291 05:00:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:54.291 05:00:36 version -- app/version.sh@14 -- # cut -f2 00:05:54.291 05:00:36 version -- app/version.sh@14 -- # tr -d '"' 00:05:54.291 
05:00:36 version -- app/version.sh@19 -- # patch=0 00:05:54.291 05:00:36 version -- app/version.sh@20 -- # get_header_version suffix 00:05:54.291 05:00:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:54.291 05:00:36 version -- app/version.sh@14 -- # cut -f2 00:05:54.291 05:00:36 version -- app/version.sh@14 -- # tr -d '"' 00:05:54.291 05:00:36 version -- app/version.sh@20 -- # suffix=-pre 00:05:54.291 05:00:36 version -- app/version.sh@22 -- # version=25.1 00:05:54.291 05:00:36 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:54.291 05:00:36 version -- app/version.sh@28 -- # version=25.1rc0 00:05:54.291 05:00:36 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:54.291 05:00:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:54.291 05:00:36 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:54.291 05:00:36 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:54.291 00:05:54.291 real 0m0.274s 00:05:54.291 user 0m0.151s 00:05:54.291 sys 0m0.178s 00:05:54.291 05:00:36 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.291 05:00:36 version -- common/autotest_common.sh@10 -- # set +x 00:05:54.291 ************************************ 00:05:54.291 END TEST version 00:05:54.291 ************************************ 00:05:54.291 05:00:36 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:54.291 05:00:36 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:54.291 05:00:36 -- spdk/autotest.sh@194 -- # uname -s 00:05:54.291 05:00:36 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:54.291 05:00:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:54.291 05:00:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:54.291 05:00:36 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:54.291 05:00:36 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:54.291 05:00:36 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:54.291 05:00:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:54.291 05:00:36 -- common/autotest_common.sh@10 -- # set +x 00:05:54.291 05:00:36 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:54.291 05:00:36 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:54.291 05:00:36 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:54.291 05:00:36 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:54.291 05:00:36 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:54.291 05:00:36 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:54.291 05:00:36 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:54.291 05:00:36 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:54.291 05:00:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.291 05:00:36 -- common/autotest_common.sh@10 -- # set +x 00:05:54.291 ************************************ 00:05:54.291 START TEST nvmf_tcp 00:05:54.291 ************************************ 00:05:54.291 05:00:36 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:54.552 * Looking for test storage... 
00:05:54.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:54.552 05:00:36 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:54.552 05:00:36 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:54.552 05:00:36 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:54.552 05:00:36 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.552 05:00:36 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:54.552 05:00:36 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.552 05:00:36 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:54.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.552 --rc genhtml_branch_coverage=1 00:05:54.552 --rc genhtml_function_coverage=1 00:05:54.552 --rc genhtml_legend=1 00:05:54.552 --rc geninfo_all_blocks=1 00:05:54.552 --rc geninfo_unexecuted_blocks=1 00:05:54.552 00:05:54.552 ' 00:05:54.552 05:00:36 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:54.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.552 --rc genhtml_branch_coverage=1 00:05:54.552 --rc genhtml_function_coverage=1 00:05:54.552 --rc genhtml_legend=1 00:05:54.552 --rc geninfo_all_blocks=1 00:05:54.552 --rc geninfo_unexecuted_blocks=1 00:05:54.552 00:05:54.552 ' 00:05:54.552 05:00:36 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:54.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.552 --rc genhtml_branch_coverage=1 00:05:54.552 --rc genhtml_function_coverage=1 00:05:54.552 --rc genhtml_legend=1 00:05:54.552 --rc geninfo_all_blocks=1 00:05:54.552 --rc geninfo_unexecuted_blocks=1 00:05:54.552 00:05:54.552 ' 00:05:54.552 05:00:36 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:54.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.552 --rc genhtml_branch_coverage=1 00:05:54.552 --rc genhtml_function_coverage=1 00:05:54.552 --rc genhtml_legend=1 00:05:54.552 --rc geninfo_all_blocks=1 00:05:54.552 --rc geninfo_unexecuted_blocks=1 00:05:54.552 00:05:54.552 ' 00:05:54.552 05:00:36 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:54.552 05:00:36 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:54.552 05:00:36 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:54.552 05:00:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:54.552 05:00:36 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.552 05:00:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:54.552 ************************************ 00:05:54.552 START TEST nvmf_target_core 00:05:54.552 ************************************ 00:05:54.553 05:00:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:54.812 * Looking for test storage... 
00:05:54.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:54.812 05:00:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:54.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.813 --rc genhtml_branch_coverage=1 00:05:54.813 --rc genhtml_function_coverage=1 00:05:54.813 --rc genhtml_legend=1 00:05:54.813 --rc geninfo_all_blocks=1 00:05:54.813 --rc geninfo_unexecuted_blocks=1 00:05:54.813 00:05:54.813 ' 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:54.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.813 --rc genhtml_branch_coverage=1 
00:05:54.813 --rc genhtml_function_coverage=1 00:05:54.813 --rc genhtml_legend=1 00:05:54.813 --rc geninfo_all_blocks=1 00:05:54.813 --rc geninfo_unexecuted_blocks=1 00:05:54.813 00:05:54.813 ' 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:54.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.813 --rc genhtml_branch_coverage=1 00:05:54.813 --rc genhtml_function_coverage=1 00:05:54.813 --rc genhtml_legend=1 00:05:54.813 --rc geninfo_all_blocks=1 00:05:54.813 --rc geninfo_unexecuted_blocks=1 00:05:54.813 00:05:54.813 ' 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:54.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.813 --rc genhtml_branch_coverage=1 00:05:54.813 --rc genhtml_function_coverage=1 00:05:54.813 --rc genhtml_legend=1 00:05:54.813 --rc geninfo_all_blocks=1 00:05:54.813 --rc geninfo_unexecuted_blocks=1 00:05:54.813 00:05:54.813 ' 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:54.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:54.813 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:54.814 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:54.814 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:54.814 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:54.814 05:00:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:54.814 05:00:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:54.814 05:00:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.814 05:00:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:54.814 ************************************ 00:05:54.814 START TEST nvmf_abort 00:05:54.814 ************************************ 00:05:54.814 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:55.074 * Looking for test storage... 
00:05:55.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.074 
05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.074 --rc genhtml_branch_coverage=1 00:05:55.074 --rc genhtml_function_coverage=1 00:05:55.074 --rc genhtml_legend=1 00:05:55.074 --rc geninfo_all_blocks=1 00:05:55.074 --rc 
geninfo_unexecuted_blocks=1 00:05:55.074 00:05:55.074 ' 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.074 --rc genhtml_branch_coverage=1 00:05:55.074 --rc genhtml_function_coverage=1 00:05:55.074 --rc genhtml_legend=1 00:05:55.074 --rc geninfo_all_blocks=1 00:05:55.074 --rc geninfo_unexecuted_blocks=1 00:05:55.074 00:05:55.074 ' 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:55.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.074 --rc genhtml_branch_coverage=1 00:05:55.074 --rc genhtml_function_coverage=1 00:05:55.074 --rc genhtml_legend=1 00:05:55.074 --rc geninfo_all_blocks=1 00:05:55.074 --rc geninfo_unexecuted_blocks=1 00:05:55.074 00:05:55.074 ' 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.074 --rc genhtml_branch_coverage=1 00:05:55.074 --rc genhtml_function_coverage=1 00:05:55.074 --rc genhtml_legend=1 00:05:55.074 --rc geninfo_all_blocks=1 00:05:55.074 --rc geninfo_unexecuted_blocks=1 00:05:55.074 00:05:55.074 ' 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
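The xtrace above shows scripts/common.sh deciding which lcov flags to use via `lt 1.15 2` → `cmp_versions 1.15 '<' 2`: both versions are split on `.-:` into arrays, missing fields count as 0, and fields are compared numerically left to right. A hedged sketch of the same less-than check (the function name `ver_lt` is ours, not SPDK's):

```shell
# Field-by-field version compare, in the spirit of cmp_versions: split on
# dots, treat missing fields as 0, succeed iff version $1 < version $2.
ver_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
    if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
  done
  return 1  # equal versions are not less-than
}

ver_lt 1.15 2 && echo "lcov 1.15 < 2: fall back to the legacy --rc options"
```

This is why the run above exports `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` into LCOV_OPTS: the installed lcov reported a version below 2.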
00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:55.074 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.075 05:00:37 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:55.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:55.075 05:00:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:03.203 05:00:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:03.203 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:03.203 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:03.203 05:00:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:03.203 Found net devices under 0000:af:00.0: cvl_0_0 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:af:00.1: cvl_0_1' 00:06:03.203 Found net devices under 0000:af:00.1: cvl_0_1 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:03.203 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:03.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:03.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:06:03.204 00:06:03.204 --- 10.0.0.2 ping statistics --- 00:06:03.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.204 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:03.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:03.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:06:03.204 00:06:03.204 --- 10.0.0.1 ping statistics --- 00:06:03.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.204 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=295703 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 295703 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 295703 ']' 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.204 05:00:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.204 [2024-12-09 05:00:44.872265] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:06:03.204 [2024-12-09 05:00:44.872314] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:03.204 [2024-12-09 05:00:44.970921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.204 [2024-12-09 05:00:45.012389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:03.204 [2024-12-09 05:00:45.012427] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:03.204 [2024-12-09 05:00:45.012436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:03.204 [2024-12-09 05:00:45.012444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:03.204 [2024-12-09 05:00:45.012467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
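The `waitforlisten 295703` call above blocks until the freshly launched nvmf_tgt is up and answering on /var/tmp/spdk.sock, printing "Waiting for process to start up and listen on UNIX domain socket..." meanwhile. The pattern is a bounded poll: confirm the pid is still alive, then probe for readiness. A simplified sketch (the loop bound and the socket-existence probe are our assumptions; SPDK's real helper honors `max_retries=100` and retries an actual RPC rather than just testing the path):

```shell
# Poll until process $1 has created the UNIX socket $2, or give up.
# Returns 0 when the socket appears, 1 if the process died or we time out.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
  for (( i = 0; i < 100; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 1  # target process exited early
    [ -S "$rpc_addr" ] && return 0          # socket exists; ready enough
    sleep 0.1
  done
  return 1
}
```

Checking `kill -0` first matters: without it, a target that crashes on startup would hang the caller for the full retry budget instead of failing fast.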
00:06:03.204 [2024-12-09 05:00:45.013948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.204 [2024-12-09 05:00:45.014056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.204 [2024-12-09 05:00:45.014056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.464 [2024-12-09 05:00:45.769832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.464 Malloc0 00:06:03.464 05:00:45 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.464 Delay0 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.464 [2024-12-09 05:00:45.844166] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.464 05:00:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:03.723 [2024-12-09 05:00:45.991054] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:06.259 Initializing NVMe Controllers 00:06:06.259 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:06.259 controller IO queue size 128 less than required 00:06:06.259 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:06.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:06.259 Initialization complete. Launching workers. 
00:06:06.259 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38023 00:06:06.259 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38084, failed to submit 62 00:06:06.259 success 38027, unsuccessful 57, failed 0 00:06:06.259 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:06.259 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.259 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:06.259 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.259 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:06.259 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:06.259 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:06.259 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:06.259 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:06.259 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:06.259 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:06.259 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:06.259 rmmod nvme_tcp 00:06:06.259 rmmod nvme_fabrics 00:06:06.259 rmmod nvme_keyring 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:06.260 05:00:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 295703 ']' 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 295703 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 295703 ']' 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 295703 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 295703 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 295703' 00:06:06.260 killing process with pid 295703 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 295703 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 295703 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # 
iptables-save 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:06.260 05:00:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:08.169 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:08.169 00:06:08.169 real 0m13.350s 00:06:08.169 user 0m14.228s 00:06:08.169 sys 0m6.618s 00:06:08.169 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.169 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.169 ************************************ 00:06:08.169 END TEST nvmf_abort 00:06:08.169 ************************************ 00:06:08.170 05:00:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:08.170 05:00:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:08.170 05:00:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.170 05:00:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:08.431 ************************************ 00:06:08.431 START TEST nvmf_ns_hotplug_stress 00:06:08.431 ************************************ 00:06:08.431 05:00:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:08.431 * Looking for test storage... 00:06:08.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.431 
05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.431 05:00:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:08.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.431 --rc genhtml_branch_coverage=1 00:06:08.431 --rc genhtml_function_coverage=1 00:06:08.431 --rc genhtml_legend=1 00:06:08.431 --rc geninfo_all_blocks=1 00:06:08.431 --rc geninfo_unexecuted_blocks=1 00:06:08.431 00:06:08.431 ' 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:08.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.431 --rc genhtml_branch_coverage=1 00:06:08.431 --rc genhtml_function_coverage=1 00:06:08.431 --rc genhtml_legend=1 00:06:08.431 --rc geninfo_all_blocks=1 00:06:08.431 --rc geninfo_unexecuted_blocks=1 00:06:08.431 00:06:08.431 ' 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:08.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.431 --rc genhtml_branch_coverage=1 00:06:08.431 --rc genhtml_function_coverage=1 00:06:08.431 --rc genhtml_legend=1 00:06:08.431 --rc geninfo_all_blocks=1 00:06:08.431 --rc geninfo_unexecuted_blocks=1 00:06:08.431 00:06:08.431 ' 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:08.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.431 --rc genhtml_branch_coverage=1 00:06:08.431 --rc genhtml_function_coverage=1 00:06:08.431 --rc genhtml_legend=1 00:06:08.431 --rc geninfo_all_blocks=1 00:06:08.431 --rc geninfo_unexecuted_blocks=1 00:06:08.431 
00:06:08.431 ' 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.431 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:08.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:08.432 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:08.691 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:08.691 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:08.691 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:08.691 05:00:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:16.819 05:00:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:16.819 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:16.820 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:16.820 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:16.820 05:00:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:16.820 Found net devices under 0000:af:00.0: cvl_0_0 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:16.820 05:00:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:16.820 Found net devices under 0000:af:00.1: cvl_0_1 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:16.820 05:00:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:16.820 05:00:58 
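The `ip netns` / `ip addr` / `ip link` sequence logged above (nvmf_tcp_init moving cvl_0_0 into the cvl_0_0_ns_spdk namespace, addressing both ends, and opening port 4420) can be sketched as a dry-run helper. Interface names, IPs, and the port come from the log; the helper name and the `RUN=echo` preview wrapper are our assumptions so the commands can be inspected without root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology built in the log above.
# RUN=echo prints each command instead of executing it (no privileges needed);
# set RUN="" (as root) to actually apply them.
RUN=${RUN:-echo}

setup_tcp_ns() {
    local tgt_if=$1 ini_if=$2 ns=$3
    $RUN ip -4 addr flush "$tgt_if"
    $RUN ip -4 addr flush "$ini_if"
    $RUN ip netns add "$ns"
    $RUN ip link set "$tgt_if" netns "$ns"        # target NIC lives inside the namespace
    $RUN ip addr add 10.0.0.1/24 dev "$ini_if"    # initiator side stays in the root ns
    $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    $RUN ip link set "$ini_if" up
    $RUN ip netns exec "$ns" ip link set "$tgt_if" up
    $RUN ip netns exec "$ns" ip link set lo up
    $RUN iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
}

setup_tcp_ns cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

With the two interfaces in separate namespaces, the cross-namespace pings that follow in the log (10.0.0.1 ↔ 10.0.0.2) exercise the actual NIC path rather than loopback.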
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:16.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:16.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:06:16.820 00:06:16.820 --- 10.0.0.2 ping statistics --- 00:06:16.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.820 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:16.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:16.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:06:16.820 00:06:16.820 --- 10.0.0.1 ping statistics --- 00:06:16.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.820 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=300155 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 300155 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 300155 ']' 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
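The `waitforlisten` step above blocks until the freshly started `nvmf_tgt` is up and listening on `/var/tmp/spdk.sock`. A minimal stand-in for that wait loop (function name, poll interval, and retry count are our assumptions; the real helper also checks that the pid is still alive) looks like:

```shell
#!/usr/bin/env bash
# Poll until a UNIX domain socket appears at the given path, or give up.
# Returns 0 once the socket exists, 1 after the retry budget is exhausted.
waitforsocket() {
    local sock=$1 retries=${2:-100}
    while (( retries > 0 )); do
        if [ -S "$sock" ]; then   # -S: path exists and is a socket
            return 0
        fi
        sleep 0.1
        retries=$(( retries - 1 ))
    done
    return 1
}
```

Usage would be `waitforsocket /var/tmp/spdk.sock || exit 1` right after launching the target, mirroring the "Waiting for process to start up and listen on UNIX domain socket" message in the log.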
00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.820 05:00:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:16.820 [2024-12-09 05:00:58.299890] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:06:16.820 [2024-12-09 05:00:58.299935] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:16.820 [2024-12-09 05:00:58.397894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.820 [2024-12-09 05:00:58.439153] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:16.820 [2024-12-09 05:00:58.439190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:16.820 [2024-12-09 05:00:58.439200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:16.820 [2024-12-09 05:00:58.439227] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:16.820 [2024-12-09 05:00:58.439235] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
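The DPDK EAL parameters above include `-c 0xE`, i.e. a hex core mask selecting CPUs 1, 2 and 3 (which matches the three reactors the app reports). Decoding such a mask can be sketched in plain bash; the helper name is ours:

```shell
#!/usr/bin/env bash
# Expand an SPDK/DPDK hex core mask (e.g. 0xE) into the list of CPU cores it selects.
mask_to_cores() {
    local mask=$(( $1 )) core=0 out=""
    while (( mask > 0 )); do
        if (( mask & 1 )); then   # bit N set => core N is selected
            out+="$core "
        fi
        core=$(( core + 1 ))
        mask=$(( mask >> 1 ))
    done
    echo "${out% }"
}
```

`mask_to_cores 0xE` yields `1 2 3`: bit 0 is clear, so core 0 stays free for other work while the nvmf target's reactors run on cores 1-3.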
00:06:16.820 [2024-12-09 05:00:58.440847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.820 [2024-12-09 05:00:58.440871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.820 [2024-12-09 05:00:58.440872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.820 05:00:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.820 05:00:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:16.820 05:00:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:16.820 05:00:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:16.820 05:00:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:16.820 05:00:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:16.820 05:00:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:16.820 05:00:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:17.078 [2024-12-09 05:00:59.361748] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.078 05:00:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:17.337 05:00:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:17.337 [2024-12-09 05:00:59.755502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:17.337 05:00:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:17.595 05:00:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:17.854 Malloc0 00:06:17.854 05:01:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:18.113 Delay0 00:06:18.113 05:01:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.371 05:01:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:18.371 NULL1 00:06:18.371 05:01:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:18.629 05:01:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=300716 00:06:18.629 05:01:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:18.629 05:01:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:18.629 05:01:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.887 Read completed with error (sct=0, sc=11) 00:06:18.887 05:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.887 Message suppressed 999 times: Read completed with error (sct=0, sc=11) (previous message repeated 5 times in total) 00:06:19.145 05:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:19.145 05:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:19.145 true 00:06:19.145 05:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:19.145 05:01:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.078 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.078 05:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.078 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.337 05:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:20.337 05:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:20.337 true 00:06:20.337 05:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:20.337 05:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.595 05:01:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.854 05:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:20.854 05:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:21.114 true 00:06:21.114 05:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:21.114 05:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.114 05:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) (previous message repeated 5 times in total) 00:06:21.385 [2024-12-09 05:01:03.748804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.385 [~100 further identical ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd errors logged between 05:01:03.748 and 05:01:03.754 elided] 00:06:21.386 [2024-12-09 05:01:03.754948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.386 [2024-12-09 05:01:03.754991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.386 [2024-12-09 05:01:03.755030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.386 [2024-12-09 05:01:03.755074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.386 [2024-12-09 05:01:03.755114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.386 [2024-12-09 05:01:03.755155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.386 [2024-12-09 05:01:03.755194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.386 [2024-12-09 05:01:03.755239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.386 [2024-12-09 05:01:03.755281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.386 [2024-12-09 05:01:03.755319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.386 [2024-12-09 05:01:03.755364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.386 [2024-12-09 05:01:03.755418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.386 [2024-12-09 05:01:03.755465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.386 [2024-12-09 05:01:03.755510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.386 [2024-12-09 05:01:03.755558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.755609] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.755655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.755699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.755745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:21.387 [2024-12-09 05:01:03.756299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.756353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.756400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.756450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.756494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.756543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.756593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.756639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.756685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.756729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 
05:01:03.756774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.756819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.756864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.756913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.756956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.757995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 
[2024-12-09 05:01:03.758075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758689] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.758981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.759021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.759073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.759115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.759293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.759336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.759375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.759420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.387 [2024-12-09 05:01:03.759458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.759496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.759538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.759577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.759618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.759659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.759701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.759741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.759787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.759834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.759880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.759928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.759972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.760452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.760508] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.760552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.760596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.760649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.760700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.760743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.760776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.760818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.760860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.760901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.387 [2024-12-09 05:01:03.760943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.760981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 
05:01:03.761735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.761985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.762030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.762074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.762122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.762165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.762214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.762266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.762315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.762365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.762408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.762453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.762499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.762555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.762603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.762649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.762693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.762756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.762798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.762843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.762892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.762943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.762991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.763034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 
[2024-12-09 05:01:03.763078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.763122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.763166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.763216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.763272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.763456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.763501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.763557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.763602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.763645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.763693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.763733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.763779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.763828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.763874] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.763918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.763968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.764962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.765011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.765049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.765088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.388 [2024-12-09 05:01:03.765129] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:21.391 05:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:06:21.391 05:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
[2024-12-09 05:01:03.781222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.781267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.781300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.781343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.781385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.781429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.781474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.781517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.781559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.781601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.781643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.781687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.781726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.781768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.781809] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.781856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.781902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.781962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.782013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.782059] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.782111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.782157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.782205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.782257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.782309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.782363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.782413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.782458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.782507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.392 [2024-12-09 05:01:03.782555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.782604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.782654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.782712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.782761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.782812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.782855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.782903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.782949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.782994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.783045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.783098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.783145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.783195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.783385] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.783436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.783489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.783536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.783585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.783633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.783679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.783725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.783776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.783836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.783884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.783932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.783979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 
05:01:03.784744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.784996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.785037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.785080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.785127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.392 [2024-12-09 05:01:03.785169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.785216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.785257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.785299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.785343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.785389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.785423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.785463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.785503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.785550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.785592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.785635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.785682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.785724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.785766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.785809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.785851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.785892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.785934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 
[2024-12-09 05:01:03.785977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.786020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.786063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.786106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.786147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.786919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.786972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787326] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.787992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.788038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.788081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.788124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.788168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.788215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.788257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.788303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.788346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.788390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.788436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.788480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.788526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.788574] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.788622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.788671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.788732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.788780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.788831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.788882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.788927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.788973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.789019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.789067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.789118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.789170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.789220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.789272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.789318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.789370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.789419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.789468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.789517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.789569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.789626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.789673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.789722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.789767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.789814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.790008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.790062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.790111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 
05:01:03.790160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.790205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.790258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.790306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.790355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.790398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.790455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.393 [2024-12-09 05:01:03.790501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.394 [2024-12-09 05:01:03.790551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.394 [2024-12-09 05:01:03.790597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.394 [2024-12-09 05:01:03.790643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.394 [2024-12-09 05:01:03.790690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.394 [2024-12-09 05:01:03.790741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.394 [2024-12-09 05:01:03.790786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.394 [2024-12-09 05:01:03.791249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.394 [2024-12-09 05:01:03.791291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.394 [2024-12-09 05:01:03.806468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.806513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.806554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.806596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.806634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.806676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.806717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.806758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.806802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.806848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.806893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.806940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.806992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.807043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.807090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 
[2024-12-09 05:01:03.807140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.807187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.807237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.807291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.807338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.807386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.807429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.807481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.807529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.807563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.807605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.807650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.807688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.807737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.807777] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.807819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.807855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.808742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.808795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.808843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:21.397 [2024-12-09 05:01:03.808889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.808938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.808989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.809035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.809081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.809125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.809175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.809226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 
05:01:03.809273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.809324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.809380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.809424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.809479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.809526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.809569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.809616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.809677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.809723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.809771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.809817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.809866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.809917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.809970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.810015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.810062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.810111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.810160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.810206] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.810265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.810321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.810369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.810418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.810464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.810511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.810558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.810608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 [2024-12-09 05:01:03.810664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.397 
[2024-12-09 05:01:03.810711] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.810758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.810803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.810855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.810904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.810968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.811014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.811061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.811113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.811162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.811220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.811273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.811318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.811368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.811413] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.811461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.811504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.811549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.811593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.811634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.811676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.811711] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.811752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.811795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.811997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812783] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.812986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.813979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.814028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 
05:01:03.814078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.814133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.814183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.814234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.814269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.814314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.814354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.814394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.814434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.814477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.814520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.814556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.814598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.814638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.814678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.815576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.815631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.815679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.815737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.815785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.815829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.815879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.815929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.398 [2024-12-09 05:01:03.815985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.399 [2024-12-09 05:01:03.816031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.399 [2024-12-09 05:01:03.816078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.399 [2024-12-09 05:01:03.816123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.399 [2024-12-09 05:01:03.816174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.399 [2024-12-09 05:01:03.816224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.399 
[2024-12-09 05:01:03.816272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.399 [2024-12-09 05:01:03.816317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.399 [2024-12-09 05:01:03.816365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.399 [2024-12-09 05:01:03.816411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.399 [2024-12-09 05:01:03.816460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.399 [2024-12-09 05:01:03.816511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.399 [2024-12-09 05:01:03.816562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.399 [2024-12-09 05:01:03.816612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.399 [2024-12-09 05:01:03.816659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.399 [2024-12-09 05:01:03.816710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.399 [2024-12-09 05:01:03.816757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.399 [2024-12-09 05:01:03.816808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.399 [2024-12-09 05:01:03.816860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.399 [2024-12-09 05:01:03.816907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.399 [2024-12-09 05:01:03.816950] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.399 [2024-12-09 05:01:03.816997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[last message repeated with successive timestamps through 05:01:03.833150]
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.833202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.833252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.833298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.833345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.833394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.833440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.833490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.833537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.833585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.833633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.833680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.833728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.833781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.833830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.402 [2024-12-09 05:01:03.833870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.833913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.833959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.833998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834460] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.834961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.835002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.835045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.835086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.835125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.835165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.835209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.835251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.835289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.835493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.835543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.835594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.835644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.835690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.835739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.835788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.835840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 
05:01:03.835888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.835933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.835984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.402 [2024-12-09 05:01:03.836032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.836081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.836127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.836175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.836227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.836275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.836323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.836376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.836423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.836471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.836514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.836563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.836615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.836665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.836714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.836764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.836811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.836858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.836904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.836960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 
[2024-12-09 05:01:03.837274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837842] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.837979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.838027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.838071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.838110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.838154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.838197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.838242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.838284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.838325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.838366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.839186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.839241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.403 [2024-12-09 05:01:03.839292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.839347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.839392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.839438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.839492] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.839541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.839591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.839639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.839686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.839733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.839780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.839829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.839875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.839922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.839973] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.840025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.840074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.840121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.840172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.840229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.840278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.840331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.840378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.840426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.840469] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.840510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.840555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.840594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.840635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.840675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.840716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.840762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.840802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.403 [2024-12-09 05:01:03.840847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.840887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.840931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.840969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 
05:01:03.841269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.841994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.842048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.842242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.842291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.842342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.842389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.842434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.842480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.842528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.842577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.842625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.842673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 
[2024-12-09 05:01:03.842718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.842768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.404 [2024-12-09 05:01:03.842816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.696 [2024-12-09 05:01:03.842866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.696 [2024-12-09 05:01:03.842922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.696 [2024-12-09 05:01:03.842971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.696 [2024-12-09 05:01:03.843443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.696 [2024-12-09 05:01:03.843491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.696 [2024-12-09 05:01:03.843542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.696 [2024-12-09 05:01:03.843593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.696 [2024-12-09 05:01:03.843642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.696 [2024-12-09 05:01:03.843689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.696 [2024-12-09 05:01:03.843740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.696 [2024-12-09 05:01:03.843787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.696 [2024-12-09 05:01:03.843838] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.696 [2024-12-09 05:01:03.843881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[previous ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated verbatim many times; entries with timestamps 2024-12-09 05:01:03.843920 through 05:01:03.858671 omitted] 00:06:21.701
[2024-12-09 05:01:03.858726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.858781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.858830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.858878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.858924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.858975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.859025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.859077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.859130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.859178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.859229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.859279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.859328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.859380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.859426] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.859472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.859520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.859569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.859616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.859668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.859716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.859763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.859809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.859855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.859905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.859952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.860001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.860047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.860081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.701 [2024-12-09 05:01:03.860121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.860166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.860204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.860251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.860293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.860336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.860378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.860421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.860464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.860673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.860714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.860760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.860802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.860849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.860891] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.860940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.860979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:21.701 [2024-12-09 05:01:03.861019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.861061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.861104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.861147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.861193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.861238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.861283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.861328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.861368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.861411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.861455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 
05:01:03.861497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.861538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.861583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.861634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.861681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.861732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.861779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.861827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.861878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.701 [2024-12-09 05:01:03.861934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.861986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.862035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.862081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.862132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.862180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.862236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.862286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.862334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.862380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.862428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.862477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.862536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.862583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.862630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.862676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.862723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.862770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.862816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.862867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 
[2024-12-09 05:01:03.862912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.862959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.863009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.863062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.863114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.863163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.863213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.863260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.863303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.863336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.863374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.863417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.863456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.863498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.863547] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.864993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.865041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.865094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.865144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.865192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.865245] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.702 [2024-12-09 05:01:03.865291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.865340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.865388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.865439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.865484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.865534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.865582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.865632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.865686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.865734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.865782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.865826] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.865874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.865924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.865970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.866016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.866061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.866112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.866158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.866213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.866266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.866311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.866359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.866409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.866455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.866505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.866540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.866581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 
05:01:03.866621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.866661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.866701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.866746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.866787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.866838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.866882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.867617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.867671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.867717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.867760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.867804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.867851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.867893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.867933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.867977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.868023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.868063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.868107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.868154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.868199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.868253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.868296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.868342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.868388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.868438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.868483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.868533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.868581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 
[2024-12-09 05:01:03.868629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.868682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.868731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.868781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.868831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.703 [2024-12-09 05:01:03.868882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.704 [2024-12-09 05:01:03.868931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.704 [2024-12-09 05:01:03.868980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.704 [2024-12-09 05:01:03.869026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.704 [2024-12-09 05:01:03.869077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.704 [2024-12-09 05:01:03.869121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.704 [2024-12-09 05:01:03.869167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.704 [2024-12-09 05:01:03.869216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.704 [2024-12-09 05:01:03.869266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.704 [2024-12-09 05:01:03.869318] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-12-09 05:01:03.884356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.884407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.884455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.884505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.884548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.884595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.884644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.884693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.884740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.884790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.884839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.884884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.884931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.884985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.885038] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.885087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.885138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.885186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.885236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.885290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.885338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.885384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.885427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.885474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.885524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.885574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.885628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.885679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.885727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.708 [2024-12-09 05:01:03.885771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.885812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.885856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.885896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.885929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.885976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.886018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.886056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.886095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.886139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.886183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.886234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.886283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.886325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.886370] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.886410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.886443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.886486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.886534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.886573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.708 [2024-12-09 05:01:03.886615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.886656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.886700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.886741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.886786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.886826] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.886872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.887712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.887756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.887797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.887843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.887888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.887935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.887987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.888034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.888082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.888129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.888174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.888228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.888281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.888327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.888379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.888427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 
05:01:03.888473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.888516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.888563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.888617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.888662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.888695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.888739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.888780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.888817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.888856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.888899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.888940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.888984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889230] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 
[2024-12-09 05:01:03.889700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.889976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.890025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.890073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.890122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.890169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.890218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.890266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.890324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.890377] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.890425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.890476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.890523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.890737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.890784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.890830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.890882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.890926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.890973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.891018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.891071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.891120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.891167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.891220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.709 [2024-12-09 05:01:03.891270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.891319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.891369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.891415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.891465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.709 [2024-12-09 05:01:03.891523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.891572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.891623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.891670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.891719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.891766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.891815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.891864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.891924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.891968] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.892959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.893004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.893037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.893080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.893119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.893159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.893201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 
05:01:03.893250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.893299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.893340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.893384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.893434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.893476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.893509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.893549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.893595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.893639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.894456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.894509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.894559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.894606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.894662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.710 [2024-12-09 05:01:03.894723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.910153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.910192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.910237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.910281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.910321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.910360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.910402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.910443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.910487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.910527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.910569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.910611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.910656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.910706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.910750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 
[2024-12-09 05:01:03.910798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.910850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.910897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.910945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.910995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.911044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.911095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.911140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.911191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.911243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.911296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.911351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.912111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.912164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.713 [2024-12-09 05:01:03.912221] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.912270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.912320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.912369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.912421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.912473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.912527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.912585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.912636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.912682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.912730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.912777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.912825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.912875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.912937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.714 [2024-12-09 05:01:03.912985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913604] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.913983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 
05:01:03.914814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.914947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.915136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:21.714 [2024-12-09 05:01:03.915189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.915240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.915288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.915339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.915387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.915432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.915481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.915530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.915584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:06:21.714 [2024-12-09 05:01:03.915630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.915678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.915725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.915774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.915819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.915854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.915901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.916284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.916336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.916385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.916426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.916472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.916515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.916560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.916602] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.916647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.916686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.916727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.916768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.916812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.916858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.916902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.916949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.916989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.917031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.917071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.714 [2024-12-09 05:01:03.917120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.917169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.917224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.715 [2024-12-09 05:01:03.917275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.917325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.917381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.917437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.917483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.917535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.917583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.917628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.917676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.917728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.917783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.917831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.917879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.917926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.917977] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.918025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.918074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.918124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.918169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.918221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.918274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.918322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.918375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.918426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.918473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.918517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.918569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.918615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.918659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.918711] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.918765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.918810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.918855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.918900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.918952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.918996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.919047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.919097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.919150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.919195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.919250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.919307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.919482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 
05:01:03.919525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.919570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.919604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.919646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.919686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.919726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.919770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.919809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.919851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.919902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.919947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.919993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.920038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.920074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.920113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [2024-12-09 05:01:03.920152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.715 [... same message repeated for each subsequent timestamp from 05:01:03.920199 through 05:01:03.935873 ...] [2024-12-09 05:01:03.936048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.718 [2024-12-09 05:01:03.936090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.718 [2024-12-09 05:01:03.936132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.718 [2024-12-09 05:01:03.936174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.718 [2024-12-09 05:01:03.936220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.718 [2024-12-09 05:01:03.936261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.718 [2024-12-09 05:01:03.936303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.718 [2024-12-09 05:01:03.936347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.718 [2024-12-09 05:01:03.936390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.718 [2024-12-09 05:01:03.936428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.718 [2024-12-09 05:01:03.936475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.718 [2024-12-09 05:01:03.936514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.718 [2024-12-09 05:01:03.936561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.718 [2024-12-09 05:01:03.936607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.718 [2024-12-09 05:01:03.936657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.718 
[2024-12-09 05:01:03.936707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.718 [2024-12-09 05:01:03.936754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.718 [2024-12-09 05:01:03.936810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.718 [2024-12-09 05:01:03.936861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.718 [2024-12-09 05:01:03.936912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.936958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.937008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.937059] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.937106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.937153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.937201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.937258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.937310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.937369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.937417] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.937466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.937515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.937565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.937612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.937661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.937718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.937770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.937817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.937866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.937913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.937959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.938013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.938074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.938124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.719 [2024-12-09 05:01:03.938172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.938223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.938867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.938918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.938968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939384] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.939973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.940022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.940070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.940120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.940173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.940231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.940279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.940326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.940373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.940418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.940464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.940523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.940578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.940624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.940674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.940720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 
05:01:03.940770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.940804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.940845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.940887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.940927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.940972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.941016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.941060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.941102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.941145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.941184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.941227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.941268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.941311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.941354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.941400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.941440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.941472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.941516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.941560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.941601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.941641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.941686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.941882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.941931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.941970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.942011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.942055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.942105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 
[2024-12-09 05:01:03.942148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.719 [2024-12-09 05:01:03.942198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.942248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.942298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.942351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.942406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.942452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.942499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.942545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.942590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.942636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.942685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.942731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.942777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.942822] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.942875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.942922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.942971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.943021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.943072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.943121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.943169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.943219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.943271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.943319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.943365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.943413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.943463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944741] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.944999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 05:01:03.945921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [2024-12-09 
05:01:03.945962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.720 [... identical *ERROR* messages repeated, timestamps 05:01:03.946004 through 05:01:03.961567 ...] 00:06:21.723 [2024-12-09
05:01:03.961610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.723 [2024-12-09 05:01:03.961652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.723 [2024-12-09 05:01:03.961692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.723 [2024-12-09 05:01:03.961732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.961776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.961819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.961867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.961914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.961958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.962003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.962046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.962086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.962124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.962167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.962212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.962261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.962314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.962370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.962422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.962470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.962516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.962566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.962613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.962663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.962713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.962764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.962814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.962860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.962911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 
[2024-12-09 05:01:03.962957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.963024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.963071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.963119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.963166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.963952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964360] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 true 00:06:21.724 [2024-12-09 05:01:03.964911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.964962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.965012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.965062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.965108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.965157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.965203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.965254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.965305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.965360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.965405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.965457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.965504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.965550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.965599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.965649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 
05:01:03.965695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.965743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.965787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.965838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.965890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.965943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.965994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.966043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.966089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.966139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.966193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.966243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.966290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.966342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.966388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.966435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.966478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.966521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.966566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.966613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.966656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.966702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.966744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.724 [2024-12-09 05:01:03.966787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.966822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.966868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.967073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.967120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.967162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 
[2024-12-09 05:01:03.967205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.967254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.967289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.967329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:21.725 [2024-12-09 05:01:03.967370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.967414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.967456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.967502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.967545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.967586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.967626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.967668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.967709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.967748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:06:21.725 [2024-12-09 05:01:03.967795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.967842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.967890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.967940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.967987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.968039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.968092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.968138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.968184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.968238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.968293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.968338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.968386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.968437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.968483] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.968527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.968580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.968628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.968678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.968725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.968772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.968818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.968869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.968921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.968969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.969020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.969071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.969125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.969179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.725 [2024-12-09 05:01:03.969233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.969280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.969324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.969375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.969422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.969468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.969522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.969557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.969594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.969637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.969676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.969720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.969762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.969810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.969852] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.969899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.969940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.970697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.970745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.970789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.970829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.970870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.970909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.970955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.970999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.971038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.971081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.971123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.971171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.971227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.971274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.971322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.971369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.971418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.971475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.971521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.971571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.971617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.971661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.971706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.971764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.971812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 05:01:03.971862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 [2024-12-09 
[2024-12-09 05:01:03.971911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.725 (previous message repeated verbatim through [2024-12-09 05:01:03.987615])
05:01:03.987665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.987713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.987765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.987812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.987861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.987913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.988314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.988358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.988402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.988448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.988491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.988527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.988568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.988613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.988657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.988697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.988740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.988783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.988827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.988867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.988903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.988944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.988989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.989035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.989077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.989118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.989162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.989203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.989252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 
[2024-12-09 05:01:03.989292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 05:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:21.729 [2024-12-09 05:01:03.989331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.989372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.989414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.989451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.989499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.989545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.989594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.989641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.989687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 05:01:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.729 [2024-12-09 05:01:03.989739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.989795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:06:21.729 [2024-12-09 05:01:03.989843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.989889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.989939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.989988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.990042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.990092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.990140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.990188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.990239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.990289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.990340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.990387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.990431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.990479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.990528] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.990578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.990636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.990682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.990731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.990780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.990826] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.990874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.990921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.990966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.991013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.991058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.991105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.991155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.991205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.729 [2024-12-09 05:01:03.991398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.991447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.991495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.991542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.991589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.991637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.729 [2024-12-09 05:01:03.991672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.991715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.991753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.991792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.991839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.991881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.991928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.991971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992017] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.992973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.993017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.993058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.993100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.993141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.993188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.993237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 
05:01:03.993280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.993322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.993371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.994103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.994157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.994214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.994260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.994309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.994357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.994402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.994450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.994516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.994560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.994608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.994650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.994687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.994731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.994773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.994824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.994866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.994913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.994958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.994999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 
[2024-12-09 05:01:03.995284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995875] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.995965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.996004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.996051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.996094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.996143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.996193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.996243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.996289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.996344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.996393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.996439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.996485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.996535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.730 [2024-12-09 05:01:03.996589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.996636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.996684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.996731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.996780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.996827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.996877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.996927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.996974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.997178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.997244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.997301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.997351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.997397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.730 [2024-12-09 05:01:03.997446] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:06:21.733 [2024-12-09 05:01:04.012487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.012528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.012576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.012617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.012659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.012702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.012743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.012783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.012832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.012873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.012918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.012950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.012992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.013035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.013089] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.013130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.013177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.013222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.013267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.013309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.013352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.013397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.013431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.013470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.014188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.014240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.014277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.014319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.014379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.014430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.014475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.014523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.014569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.014615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.014662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.014713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.014761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.014816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.014860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.014905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.014952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.014999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 05:01:04.015049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.733 [2024-12-09 
05:01:04.015094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.015143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.015191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.015243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.015276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.015322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.015364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.015402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.015448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.015493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.015535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.015584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.015617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.015661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.015699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.015746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.015789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.015833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.015876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.015917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.015958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.016001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.016042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.016086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.016131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.016173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.016219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.016261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.016305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 
[2024-12-09 05:01:04.016350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.016391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.016431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.016476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.016526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.016577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.016623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.016673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.016717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.016764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.016811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.016864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.016921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.016978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.017025] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.017069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.017264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.017311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.017358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.017409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.017457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.017502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.017558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.017604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.017652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.017694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.017744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.017806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.017852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.734 [2024-12-09 05:01:04.017899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.017947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.018005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:21.734 [2024-12-09 05:01:04.018476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.018534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.018582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.018630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.018674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.018722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.018778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.018829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.018875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.018923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.018967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 
05:01:04.019596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019711] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.019974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.020017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.020067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.020111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.020146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.020194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.020235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.020282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.020323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.020372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.020410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.020455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.020497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.020536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.020580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.020624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.020667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.020708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.020752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.020794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 
[2024-12-09 05:01:04.020834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.020872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.734 [2024-12-09 05:01:04.020914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.020958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.021002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.021051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.021101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.021150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.021199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.021247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.021424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.021473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.021526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.021574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.021625] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.021676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.021724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.021771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.021816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.021850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.021888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.021933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.021972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.022011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.022051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.022099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.022141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.022175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.022221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.735 [2024-12-09 05:01:04.022265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.022308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.022353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.022393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.022441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.022476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.022515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.022556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.022599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.022647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.022689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.022730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.022774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.022817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.735 [2024-12-09 05:01:04.022861] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:06:21.738 [2024-12-09 05:01:04.038271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.038319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.038362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.038409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.038461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.038509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.038553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.038601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.038648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.038694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.038739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.038802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.038855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.038906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.038952] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.039000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.039045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.039097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.039152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.039194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.039248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.039297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.039348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.039392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.039446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.039493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.039541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.039589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.039635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.039682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.039866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.039919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.039970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.040016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.040065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.040112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.040158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.040211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.040258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.040306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.040352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.040402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.040450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 
05:01:04.040510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.040551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.040594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.040638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.040682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.040731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.040763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.040807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.040850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.040893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.040934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.040981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 
[2024-12-09 05:01:04.041747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041826] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.041987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.042026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.042067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.042109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.042151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.042193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.042239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.042280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.042323] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.042360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.042403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.042444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.042486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.042529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.042577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.043366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.043419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.043454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.043495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.043539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.043581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.043623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.043666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.738 [2024-12-09 05:01:04.043708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.043749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.043782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.738 [2024-12-09 05:01:04.043825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.043866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.043907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.043954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.043995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044297] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.044971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.045020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.045073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.045118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.045169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.045217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.045262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.045310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.045356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.045403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.045449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.045495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.045545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.045596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 
05:01:04.045655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.045698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.045743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.045791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.045838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.045891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.045940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.045988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.046035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.046079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.046126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.046171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.046227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.046409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.046457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.046506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.046558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.046604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.046647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.046690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.046738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.046783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.046832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.046881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.046930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.046977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.047035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.047080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.047124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 
[2024-12-09 05:01:04.047166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.047549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.047597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.047640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.047685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.047734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.047777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.047809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.047840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.047883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.047924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.047969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.048022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.048066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.048108] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.048153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.048192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.048238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.048277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.048317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.048357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.048409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.048452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.048502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.048547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.048584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.048627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.048671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.739 [2024-12-09 05:01:04.048714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.739 [2024-12-09 05:01:04.048756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated for every read command from 05:01:04.048797 through 05:01:04.064535 ...]
00:06:21.742 [2024-12-09 05:01:04.064574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:06:21.742 [2024-12-09 05:01:04.064613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.064654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.064695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.064737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.064769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.064808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.064851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.064897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.064936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.064977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065192] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.065995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.066044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.066089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.066137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.066185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.066237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.066290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.066341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.066389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.066440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.742 [2024-12-09 05:01:04.066490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.066536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 
05:01:04.066588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.066635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.066686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.066752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.066802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.066846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.066894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.066942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.067126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.067174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.067223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.067273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.067317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.067357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.067399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.067442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.067483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.067531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.067571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.067614] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.067654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.067692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.067732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.067776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.067821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.067854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.067893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.067934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.067979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 
[2024-12-09 05:01:04.068023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068634] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.068966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.069009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.069054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:21.743 [2024-12-09 05:01:04.069800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.069852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.069897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 
05:01:04.069945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.069991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.743 [2024-12-09 05:01:04.070939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.070976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 
[2024-12-09 05:01:04.071180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071783] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.071961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.072011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.072064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.072121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.072165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.072215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.072263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.072306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.072355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.072403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.072450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.744 [2024-12-09 05:01:04.072495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.072543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.072589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.072769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.072819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.072865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.072916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.072961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073286] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.744 [2024-12-09 05:01:04.073981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.089710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.089753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.089794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.089835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.089880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.089924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.089968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.090010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.090049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.090094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.090131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.090174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.090227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.090276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.090326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 
05:01:04.090373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.090421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.090468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.090512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.090557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.090599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.090641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.090679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.747 [2024-12-09 05:01:04.090719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.090763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.090804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.090836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.090878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.090922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.090960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.091001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.091037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.091079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.091121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.091167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.091215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.091262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.091306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.091350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.091392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.091437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.091478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.091518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.091555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 
[2024-12-09 05:01:04.091601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.091648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.091695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.091762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.091811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.091859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.091906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.091954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.092003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.092052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.092240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.092291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.092338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.092386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.092434] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.092482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.092530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.092578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.092628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.092679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.092724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.092772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.092818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.092867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.092913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.092971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.093432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.093479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.093524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.748 [2024-12-09 05:01:04.093564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.093598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.093647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.093688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.093730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.093772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.093814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.093857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.093903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.093942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.093986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094149] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.094953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.095000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.095043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.095097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.095151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.748 [2024-12-09 05:01:04.095199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.095270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.095321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.095368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.095413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.095461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 
05:01:04.095505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.095551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.095597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.095645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.095691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.095739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.095791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.095838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.095882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.095926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.095974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.096022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.096068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.096117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.096178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.096226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.096260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.096301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.096475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.096520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.096561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.096610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.096655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.096698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.096743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.096780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.096828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.096869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.096917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 
[2024-12-09 05:01:04.096958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097541] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097711] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.097997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.098045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.098091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.098137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.098185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.749 [2024-12-09 05:01:04.098239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.098284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.098328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.098363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.098403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.098439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.098478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.098517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.098558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.098599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.098640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.098681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.098726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.098762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.098821] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.749 [2024-12-09 05:01:04.098868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[previous *ERROR* line repeated with varying timestamps between 2024-12-09 05:01:04.098916 and 05:01:04.114364; duplicates elided]
> SGL length 1 00:06:21.752 [2024-12-09 05:01:04.114406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.752 [2024-12-09 05:01:04.114454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.752 [2024-12-09 05:01:04.114494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.752 [2024-12-09 05:01:04.114540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.752 [2024-12-09 05:01:04.114574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.752 [2024-12-09 05:01:04.114614] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.752 [2024-12-09 05:01:04.114659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.752 [2024-12-09 05:01:04.114704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.752 [2024-12-09 05:01:04.114750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.752 [2024-12-09 05:01:04.115218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.752 [2024-12-09 05:01:04.115267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.752 [2024-12-09 05:01:04.115307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.115352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.115392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.115433] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.115474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.115514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.115558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.115604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.115653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.115700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.115750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.115795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.115846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.115902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.115950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.115997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.116041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.116089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.116137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.116187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.116237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.116284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.116341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.116391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.116441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.116491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.116535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.116588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.116641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.116687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.116738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.116786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 
05:01:04.116837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.116884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.116934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.116983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.117964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.118005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.118046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 
[2024-12-09 05:01:04.118087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.118284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.118339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.118380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.118419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.118463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.118507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.118560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.118611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.118659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.118704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.118755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.118803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.118850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.118903] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.118957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.119008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.119059] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.119109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.119157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.119202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.119254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.119301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.119349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.119395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.119446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.119492] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.119539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.119591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.753 [2024-12-09 05:01:04.119647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.119693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.119742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.119789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.119837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.119885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.753 [2024-12-09 05:01:04.119934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.119994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120335] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.120974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.121020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.121058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.121101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.121152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.121193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:21.754 [2024-12-09 05:01:04.122032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.122079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.122120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.122160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.122216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.122260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.122310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.122365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.122410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.122459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.122508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.122556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.122607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.122659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.122706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.122754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.122807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.122855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.122908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.122956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.123003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.123050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 
[2024-12-09 05:01:04.123102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.123156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.123202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.123252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.123300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.123345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.123399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.123451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.123498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.123548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.123600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.123651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.123685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.123729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.123780] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.123823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.123862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.123901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.123945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.123989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.124926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.125122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.125172] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:21.754 [2024-12-09 05:01:04.125225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[last message repeated verbatim from 2024-12-09 05:01:04.125275 through 2024-12-09 05:01:04.140652; only the timestamps differ]
00:06:22.046 [2024-12-09 05:01:04.140697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:06:22.046 [2024-12-09 05:01:04.140740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.046 [2024-12-09 05:01:04.140782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.046 [2024-12-09 05:01:04.140827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.046 [2024-12-09 05:01:04.140859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.046 [2024-12-09 05:01:04.140904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.140948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.140994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.141033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.141077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.141119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.141158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.141200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.141244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.141289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.141334] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.141374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.141412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.141464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.141511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.141562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.141608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.141663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.141720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.141769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.141818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.141869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.141915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.141962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.142009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.142058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.142109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.142155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.142202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.142256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.142302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.142352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.142412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.142464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.142510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.142558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.142605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.142653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.142703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 
05:01:04.142763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.142812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.142858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.142905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.142950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.143001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.143050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.143091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.143125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.143167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.143220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.143265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.143308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.143353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.143401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.143447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.143490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.143534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.143576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.143829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.143878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.143922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.143964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144206] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 
[2024-12-09 05:01:04.144255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144892] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.144986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.145035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.145085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.145131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.145182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.145234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.145282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.047 [2024-12-09 05:01:04.145330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.145377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.145434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.145484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.145533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.145581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:22.048 [2024-12-09 05:01:04.145628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.145676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.145728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.145782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.145840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.145886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.145936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.145986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.146037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.146083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.146138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.146185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.146231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.146279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.146329] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.146379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.146413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.146452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.146498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.146535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.146583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.146625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.146671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.146718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.147440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.147485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.147527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.147567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.147607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.147647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.147690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.147732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.147771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.147814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.147853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.147901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.147941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.147985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.148031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.148082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.148128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.148177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.148227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 
05:01:04.148287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.148336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.148386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.148433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.148484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.148533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.148594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.148645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.148695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.148742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.148791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.148837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.148883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.148932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.148981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.149043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.149093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.149141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.149189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.149242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.149289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.149334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.149382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.149429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.149475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.149521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.149575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.149631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.149676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 
[2024-12-09 05:01:04.149710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.149751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.149798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.149837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.149880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.149924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.149968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.150010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.150060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.150101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.150145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.150181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.150223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.048 [2024-12-09 05:01:04.150267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.049 [2024-12-09 05:01:04.150307] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.049 [2024-12-09 05:01:04.150349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.049 [2024-12-09 05:01:04.150543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.049 [2024-12-09 05:01:04.150595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.049 [2024-12-09 05:01:04.150631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.049 [2024-12-09 05:01:04.150672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.049 [2024-12-09 05:01:04.150721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.049 [2024-12-09 05:01:04.150762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.049 [2024-12-09 05:01:04.150805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.049 [2024-12-09 05:01:04.150847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.049 [2024-12-09 05:01:04.150886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.049 [2024-12-09 05:01:04.150928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.049 [2024-12-09 05:01:04.150969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.049 [2024-12-09 05:01:04.151015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.049 [2024-12-09 05:01:04.151058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:22.049 [2024-12-09 05:01:04.151100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "Read NLB 1 * block size 512 > SGL length 1" *ERROR* lines from ctrlr_bdev.c:384 repeated through timestamp 05:01:04.167253; duplicates omitted ...]
> SGL length 1 00:06:22.052 [2024-12-09 05:01:04.167296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.167337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.167378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.167416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.167461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.167504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.167547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.167589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.167634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.167671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.167718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.167774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.167823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.167873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.167922] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.167968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.168018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.168071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.168125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.168304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.168350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.168398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.168444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.168495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.168544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.168592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.168649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.168701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.168749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.168793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.052 [2024-12-09 05:01:04.168840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.168886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.168935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.168980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.169026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.169076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.169525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.169572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.169611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.169659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.169699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.169743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.169783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 
05:01:04.169819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.169864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.169904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.169947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.169986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.170987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.171036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 
[2024-12-09 05:01:04.171084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.171129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.171175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.171223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.171272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.171324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.171379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.171425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.171478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.171521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.171565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.171610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.171661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.171706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.171752] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.171800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.171851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.171900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.171948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.171994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.172044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.172091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.172137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.172185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.172237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.172284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.172336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.172383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.172553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:22.053 [2024-12-09 05:01:04.172598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.172643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.172683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.172733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.172775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.172825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.172867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.172908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.172947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.172991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.173041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.173075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.173120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.173161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.173214] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.173265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.173306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.173348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.173396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.053 [2024-12-09 05:01:04.173437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.173481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.173520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.173561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.173598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.173641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.173687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.173732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.173775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.173818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.173858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.173903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.173947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.173990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.174035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.174075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.174111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.174148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.174194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.174241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.174287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.174336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.174398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 05:01:04.174443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 [2024-12-09 
05:01:04.174487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.054 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:22.054 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.054 [2024-12-09 05:01:04.399264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.402945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:22.055 [2024-12-09 05:01:04.402987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403572] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.403995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 
05:01:04.404756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.404954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.405478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.405531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.405581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.405631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.405677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.405724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.405772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.405821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.405868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.055 [2024-12-09 05:01:04.405913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.405957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.405991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 
[2024-12-09 05:01:04.406538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.406988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407114] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.407969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.408019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.408067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.408126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.408172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.408221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.408272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409178] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.409981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.410020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.410061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.410102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.410140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.410170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.410216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.410255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.410301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.410345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.410385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 
05:01:04.410430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.410475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.410514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.410564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.410605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.056 [2024-12-09 05:01:04.410637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.410678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.410728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.410771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.410812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.410854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.410898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.410938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.410985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.411024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.411074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.411114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.411157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.411212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.411253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.411296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.411334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.411372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.411420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.411468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.411515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.411559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.411609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.411664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 
[2024-12-09 05:01:04.411711] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.411760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.411807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.411997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.412045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.412091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.412140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.412185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.412238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.412284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.412334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.412377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.412422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.412466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.412524] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.412567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.412611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.412651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.412692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.413068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.413111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.413152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.413198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.413248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.413287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.413329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.413368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.413408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.057 [2024-12-09 05:01:04.413449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:22.057 [2024-12-09 05:01:04.413489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:06:22.061 [2024-12-09 05:01:04.429271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.429320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.429362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.429402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.429441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.429487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.429967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430302] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.430999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.431048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.431099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.431145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.431193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.431246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.431293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.431339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.431385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.431432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.431488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.431531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.431577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 
05:01:04.431626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.431674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.431717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.431762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.431809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.431861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.431915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.431965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.432015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.432065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.432112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.432146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.432192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.432233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.061 [2024-12-09 05:01:04.432275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
00:06:22.061 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:06:22.061 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
> SGL length 1 00:06:22.063 [2024-12-09 05:01:04.442128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.442168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.442214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.442256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.442304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.442345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.442393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.442439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.442484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.442525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.442568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.442609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.442654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.442709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.442756] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.442804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.442851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.442901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.442954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.443001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.443049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.443095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.063 [2024-12-09 05:01:04.443141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.443190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.443240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.443286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.443335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.443385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.443435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.443483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.443532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.443580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.443631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.443677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.443731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.443915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.443963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.443998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 
05:01:04.444266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.444993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.445034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.445074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.445115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.445160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.445201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.445249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.445291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.445330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.445369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.445400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.445443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 
[2024-12-09 05:01:04.445489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.445531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.445581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.445626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.445672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.445722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.445772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.445825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.445879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.445925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.445974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.446021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.446066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.446114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.446161] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.446216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.446265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.446311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.446362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.446415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.446465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.446517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.446566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.446611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.446664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.446713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:22.064 [2024-12-09 05:01:04.447213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.447264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 
05:01:04.447309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.447357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.064 [2024-12-09 05:01:04.447405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.447456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.447501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.447549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.447592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.447636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.447688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.447731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.447775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.447823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.447865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.447907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.447956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.447989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 
[2024-12-09 05:01:04.448580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.448965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.449003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.449047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.449087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.449129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.449171] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.449217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.449265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.449312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.449361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.449409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.449454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.449500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.449549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.449598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.449648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.449704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.449753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.449802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.449847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:22.065 [2024-12-09 05:01:04.449898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.449947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.450001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.450052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.450810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.450853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.450899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.450941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.450985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.451029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.451063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.451101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.451139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.451182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.451239] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.451283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.451326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.451369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.451417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.451450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.451489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.451531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.451575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.451614] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.451658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.451698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.451739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.451780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.451821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:22.065 [2024-12-09 05:01:04.451861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" messages repeated (timestamps 05:01:04.451900 through 05:01:04.467632); duplicates omitted ...]
00:06:22.069 [2024-12-09 05:01:04.467678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.467727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.467777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.467827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.467872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.467922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.467969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.468014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.468060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.468115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.468165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.468215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.468267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.468315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.468360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 
05:01:04.468412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.468470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.468521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.468570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.468620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.468668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.468712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.468753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.468794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.468828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 
[2024-12-09 05:01:04.469808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.469985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.470031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.470074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.470117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.470157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.470192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.470230] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.470274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.470319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.470358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.069 [2024-12-09 05:01:04.470400] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.470441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.470484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.470525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.470568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.470612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.470659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.470707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.470753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.470800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.470848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.470897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.470943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.470993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.471040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:22.070 [2024-12-09 05:01:04.471086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.471135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.471200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.471257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.471306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.471358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.471404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.471458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.471516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.471561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.471607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.471666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.471714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.471761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.471810] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.472571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.472619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.472661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.472702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.472743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.472785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.472829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.472878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.472929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.472976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.473026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.473072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.473120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.473173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.473223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.473275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.473320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.473366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.473421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.473475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.473528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.473575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.473623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.473674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.473719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.473769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.473819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.473866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 
05:01:04.473916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.473973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.474986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.475034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.475077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.475119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.475157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.475201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 
[2024-12-09 05:01:04.475257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.475303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.475348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.475390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.070 [2024-12-09 05:01:04.475429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.475470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.475503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.475690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.475731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.475772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.475816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.475860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.475899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.475946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.476000] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.476048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.476094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.476141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.476186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.476244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.476291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.476334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.476386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.476433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.476904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.476957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.477005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.477058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.477104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:22.071 [2024-12-09 05:01:04.477152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.477197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.477246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.477293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.477338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.477389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.477434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.477482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.477533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.477580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.477624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.477670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.477710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.477745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.477787] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.071 [2024-12-09 05:01:04.477828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[previous *ERROR* line repeated verbatim with successive timestamps from 00:06:22.071 [2024-12-09 05:01:04.477870] through 00:06:22.339 [2024-12-09 05:01:04.492178]]
00:06:22.339 [2024-12-09 05:01:04.492232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:06:22.339 [2024-12-09 05:01:04.492282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.339 [2024-12-09 05:01:04.492337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.339 [2024-12-09 05:01:04.492382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.339 [2024-12-09 05:01:04.492430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.339 [2024-12-09 05:01:04.492480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.339 [2024-12-09 05:01:04.492535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.339 [2024-12-09 05:01:04.492583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.339 [2024-12-09 05:01:04.492631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.339 [2024-12-09 05:01:04.492675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.339 [2024-12-09 05:01:04.492722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.339 [2024-12-09 05:01:04.493475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.339 [2024-12-09 05:01:04.493516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.339 [2024-12-09 05:01:04.493560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.339 [2024-12-09 05:01:04.493603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.339 [2024-12-09 05:01:04.493645] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.339 [2024-12-09 05:01:04.493687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.339 [2024-12-09 05:01:04.493732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.339 [2024-12-09 05:01:04.493775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.339 [2024-12-09 05:01:04.493820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.493868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.493902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.493947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.493988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.494035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.494075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.494119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.494158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.494201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.494251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.494295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.494342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.494382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.494426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.494465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.494513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.494557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.494605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.494650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.494700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.494748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.494800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.494861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.494908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 
05:01:04.494956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.495005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.495055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.495109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.495153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.495199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.495247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.495296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.495343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.495390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.495439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.495489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.495536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.495589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.495634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.495682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.495733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.495781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.495839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.495894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.495945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.495991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.496037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.496081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.496141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.496193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.496239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.496284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.496325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 
[2024-12-09 05:01:04.496366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.496572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.496614] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.496656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.496696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.496730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.496772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.496808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.496850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.496893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.496934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.496973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497099] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.497964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.498013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.498061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.498112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.498159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.498212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.498259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.498304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.498351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.340 [2024-12-09 05:01:04.498403] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.498452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.498499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.498548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.498596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.498641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.498696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.498740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.498788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.498838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.498886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.498934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.498978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.499035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.499083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.499137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.499183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.499233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.499283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.499331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.499382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.499434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:22.341 [2024-12-09 05:01:04.500193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.500242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.500287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.500327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.500376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.500416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.500460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.500501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.500534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.500580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.500622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.500663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.500707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.500754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.500793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.500834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.500878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.500923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.500961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 
[2024-12-09 05:01:04.501107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501731] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.501983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.502025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.502067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.502114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.502165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.502213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.502267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.502314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.502363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:22.341 [2024-12-09 05:01:04.502411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.502458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.502507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.502560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.502611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.502658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.502711] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.502760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.502806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.502857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.502903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.502954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.503003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.503218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.341 [2024-12-09 05:01:04.503267] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated continuously from 2024-12-09 05:01:04.503316 through 05:01:04.518292; repeats omitted]
> SGL length 1 00:06:22.344 [2024-12-09 05:01:04.518335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.344 [2024-12-09 05:01:04.518378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.518419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.518461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.518505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.518547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.518594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.518637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.518679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.518716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.518761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.518807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.518854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.518902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.518949] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.518995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.519044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.519093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.519139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.519196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.519246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.519293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.519339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.519387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.519433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.519485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.519529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.519580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.519626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.519676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.519725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.519773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.519832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.519875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.519920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.519967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.520016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.520064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.520113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.520158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.520199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.520238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.520279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 
05:01:04.520320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.520360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.520549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.520593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.520637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.520669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.520709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.520750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.520789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.520832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.520877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.520919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.520964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 
[2024-12-09 05:01:04.521678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.521995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.522045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.522094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.522140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.522187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.522237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.522286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.522332] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.522383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.522433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.522479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.522529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.522578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.523291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.523336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.523376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.523417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.523464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.523507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.523546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.345 [2024-12-09 05:01:04.523593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.523644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:22.346 [2024-12-09 05:01:04.523689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.523731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.523779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.523830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.523883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.523929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.523975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.524018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.524066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.524120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.524163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.524213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.524261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.524310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.524363] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.524414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.524460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.524508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.524556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.524606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.524650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.524704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.524753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.524798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.524845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.524897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.524944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.524989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.525039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.525089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.525137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.525183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.525237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.525281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.525330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.525379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.525431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.525480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.525528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.525578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.525623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.525670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.525718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 
05:01:04.525765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.525819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.525869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.525917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.525963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.526011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.526057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.526112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.526154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.526196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.526241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.526274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.526450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.526491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.526532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.526575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.526619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.526659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.526703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.526736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.526777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.526818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.526857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.526907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.526947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.526994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.527036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.527084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.527125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 
[2024-12-09 05:01:04.527172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.527205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.527250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.527292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.527330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.527376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.527416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.527454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.527501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.527539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.527580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.527626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.527667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.527714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.527754] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.527796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.528386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.528438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.528490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.528542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.528590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.528642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.528691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.528737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.528789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.346 [2024-12-09 05:01:04.528834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.347 [2024-12-09 05:01:04.528869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.347 [2024-12-09 05:01:04.528911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.347 [2024-12-09 05:01:04.528954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:22.347 [2024-12-09 05:01:04.528996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.349 [2024-12-09 05:01:04.538273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:22.349 true 00:06:22.349 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:22.349 05:01:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.283 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.541 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:23.541 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:23.541 true 00:06:23.541 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:23.541 05:01:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.799 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.058 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:24.058 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:24.318 true 00:06:24.318 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:24.318 05:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.258 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.516 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:25.516 05:01:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:25.775 true 00:06:25.775 05:01:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:25.775 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.033 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.033 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:26.033 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:26.291 true 00:06:26.291 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:26.291 05:01:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.670 05:01:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:06:27.670 05:01:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:27.670 05:01:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:27.929 true 00:06:27.929 05:01:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:27.929 05:01:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.868 05:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.868 05:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:28.868 05:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:29.128 true 00:06:29.128 05:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:29.128 05:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.387 05:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.387 05:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:29.387 
05:01:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:29.646 true 00:06:29.646 05:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:29.646 05:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.024 05:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.024 05:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:31.024 05:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:31.024 true 00:06:31.283 05:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:31.283 05:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.283 05:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.541 05:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:31.541 05:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:31.801 true 00:06:31.801 05:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:31.801 05:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.060 05:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.060 05:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:32.060 05:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:32.319 true 00:06:32.319 05:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:32.319 05:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.582 05:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.857 
05:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:32.858 05:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:32.858 true 00:06:32.858 05:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:32.858 05:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.943 05:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.223 05:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:34.223 05:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:34.491 true 00:06:34.491 05:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:34.491 05:01:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.469 05:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.469 05:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:35.469 05:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:35.780 true 00:06:35.780 05:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:35.780 05:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.780 05:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.052 05:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:36.052 05:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:36.317 true 00:06:36.317 05:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:36.317 05:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.279 05:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.554 05:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:37.554 05:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:37.824 true 00:06:37.824 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:37.824 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.766 05:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.766 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:38.766 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:39.025 true 00:06:39.025 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:39.025 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.284 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.284 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:39.284 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:39.557 true 00:06:39.557 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:39.557 05:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.940 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.940 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:06:40.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.940 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:40.940 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:41.199 true 00:06:41.199 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:41.200 05:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.141 05:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.141 05:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:42.141 05:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:42.401 true 00:06:42.401 05:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:42.401 05:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.661 05:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.661 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:42.661 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:42.921 true 00:06:42.921 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:42.921 05:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.304 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.304 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:44.304 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:44.565 true 00:06:44.565 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:44.565 05:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.508 05:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.508 05:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:45.508 05:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:45.769 true 00:06:45.769 05:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:45.769 05:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.032 05:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.032 05:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:46.032 05:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:46.292 true 00:06:46.292 05:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:46.292 05:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.673 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.673 05:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.673 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.673 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.673 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.673 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.673 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.673 05:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:47.673 05:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:47.933 true 00:06:47.933 05:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:47.933 05:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.894 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.894 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:48.894 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:48.894 Initializing NVMe Controllers 00:06:48.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:48.894 Controller IO queue size 128, less than required. 00:06:48.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:48.894 Controller IO queue size 128, less than required. 00:06:48.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:48.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:48.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:48.894 Initialization complete. Launching workers. 
00:06:48.894 ========================================================
00:06:48.894 Latency(us)
00:06:48.894 Device Information : IOPS MiB/s Average min max
00:06:48.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2720.92 1.33 30724.51 1530.31 1012555.84
00:06:48.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16938.06 8.27 7557.30 1545.64 435276.47
00:06:48.894 ========================================================
00:06:48.894 Total : 19658.99 9.60 10763.79 1530.31 1012555.84
00:06:48.894 00:06:49.153 true 00:06:49.154 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 300716 00:06:49.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (300716) - No such process 00:06:49.154 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 300716 00:06:49.154 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.413 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:49.413 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:49.413 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:49.413 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:49.413 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.413 05:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:49.674 null0 00:06:49.674 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.674 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.674 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:49.944 null1 00:06:49.944 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.944 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.944 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:49.944 null2 00:06:50.204 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:50.204 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:50.204 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:50.204 null3 00:06:50.204 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:50.204 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:50.204 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:50.462 null4 00:06:50.463 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:50.463 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:50.463 05:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:50.723 null5 00:06:50.723 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:50.723 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:50.723 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:50.984 null6 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:50.984 null7 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:50.984 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:50.985 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 306442 306443 306445 306448 306449 306452 306454 306455
00:06:51.245 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:51.245 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:51.245 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:51.245 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:51.245 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:51.245 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:51.245 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:51.245 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.504 05:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:51.764 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:51.764 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:51.764 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:51.764 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:51.764 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:51.764 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:51.764 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:51.764 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:52.023 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:52.024 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:52.284 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.284 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.284 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:52.284 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.284 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.284 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:52.284 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.284 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.284 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:52.284 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.284 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.284 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:52.284 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.284 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.285 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.285 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:52.285 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.285 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:52.285 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.285 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.285 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:52.285 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.285 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.285 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:52.545 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:52.545 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:52.545 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:52.545 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:52.545 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:52.545 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:52.545 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:52.545 05:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.811 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:53.071 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:53.071 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:53.071 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:53.071 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:53.071 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:53.071 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:53.071 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:53.071 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:53.071 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.071 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.071 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:53.071 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.072 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.072 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:53.072 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.072 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.072 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.072 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.072 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:53.072 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:53.072 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.072 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.072 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:53.072 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.072 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.072 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:53.072 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.072 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.072 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:53.072 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.072 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.072 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:53.331 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:53.331 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:53.331 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:53.331 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:53.331 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:53.331 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:53.331 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:53.331 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.591 05:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:53.852 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:53.852 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:53.852 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:53.852 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:53.852 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:53.852 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:53.852 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:53.852 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:53.852 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.852 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.852 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:53.852 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.852 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.852 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:53.852 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.852 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.852 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:54.113 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.113 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.113 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:54.113 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.113 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.113 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:54.113 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.113 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.113 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:54.113 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.113 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.113 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:54.113 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.113 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.113 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:54.113 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:54.113 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:54.113 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:54.113 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:54.113 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:54.114 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:54.114 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.114 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:54.374 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.374 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.374 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:54.374 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.374 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.374 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:06:54.374 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.374 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.374 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.374 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:54.374 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.375 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:54.375 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.375 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.375 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:54.375 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.375 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.375 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.375 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.375 05:01:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:54.375 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:54.375 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.375 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.375 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:54.635 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:54.635 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:54.635 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:54.635 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:54.635 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:54.635 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:54.635 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.635 05:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:54.896 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.156 05:01:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:55.156 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:55.156 rmmod nvme_tcp 00:06:55.156 rmmod nvme_fabrics 00:06:55.156 rmmod nvme_keyring 00:06:55.415 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:55.415 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:55.415 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:55.415 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 300155 ']' 00:06:55.415 05:01:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 300155 00:06:55.415 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 300155 ']' 00:06:55.415 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 300155 00:06:55.415 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:55.415 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.415 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 300155 00:06:55.415 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:55.415 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:55.415 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 300155' 00:06:55.415 killing process with pid 300155 00:06:55.415 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 300155 00:06:55.415 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 300155 00:06:55.674 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:55.674 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:55.674 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:55.674 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:55.674 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 
00:06:55.674 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:55.674 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:55.674 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:55.674 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:55.674 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.674 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:55.674 05:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.580 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:57.580 00:06:57.580 real 0m49.362s 00:06:57.580 user 3m11.587s 00:06:57.580 sys 0m20.087s 00:06:57.580 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.580 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:57.580 ************************************ 00:06:57.580 END TEST nvmf_ns_hotplug_stress 00:06:57.580 ************************************ 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:06:57.840 ************************************ 00:06:57.840 START TEST nvmf_delete_subsystem 00:06:57.840 ************************************ 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:57.840 * Looking for test storage... 00:06:57.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.840 05:01:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:57.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.840 --rc genhtml_branch_coverage=1 00:06:57.840 --rc genhtml_function_coverage=1 00:06:57.840 --rc genhtml_legend=1 00:06:57.840 --rc geninfo_all_blocks=1 00:06:57.840 --rc geninfo_unexecuted_blocks=1 00:06:57.840 00:06:57.840 ' 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:57.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.840 --rc genhtml_branch_coverage=1 00:06:57.840 --rc genhtml_function_coverage=1 00:06:57.840 --rc genhtml_legend=1 00:06:57.840 --rc geninfo_all_blocks=1 00:06:57.840 --rc geninfo_unexecuted_blocks=1 00:06:57.840 00:06:57.840 ' 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:57.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.840 --rc genhtml_branch_coverage=1 00:06:57.840 --rc genhtml_function_coverage=1 00:06:57.840 --rc genhtml_legend=1 00:06:57.840 --rc geninfo_all_blocks=1 00:06:57.840 --rc geninfo_unexecuted_blocks=1 00:06:57.840 00:06:57.840 ' 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:57.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.840 --rc 
genhtml_branch_coverage=1 00:06:57.840 --rc genhtml_function_coverage=1 00:06:57.840 --rc genhtml_legend=1 00:06:57.840 --rc geninfo_all_blocks=1 00:06:57.840 --rc geninfo_unexecuted_blocks=1 00:06:57.840 00:06:57.840 ' 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.840 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # 
NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.100 05:01:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:58.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:58.100 05:01:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.226 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:06.226 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:06.226 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:06.226 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:06.226 05:01:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:06.226 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:06.226 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:06.226 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:06.226 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:06.226 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:06.227 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:06.227 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:06.227 Found net devices under 0000:af:00.0: cvl_0_0 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:af:00.1: cvl_0_1' 00:07:06.227 Found net devices under 0000:af:00.1: cvl_0_1 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:06.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:06.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:07:06.227 00:07:06.227 --- 10.0.0.2 ping statistics --- 00:07:06.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.227 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:06.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:06.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:07:06.227 00:07:06.227 --- 10.0.0.1 ping statistics --- 00:07:06.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.227 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:06.227 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:06.228 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.228 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:06.228 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:06.228 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:06.228 05:01:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:06.228 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:06.228 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.228 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=311118 00:07:06.228 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:06.228 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 311118 00:07:06.228 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 311118 ']' 00:07:06.228 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.228 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.228 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.228 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.228 05:01:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.228 [2024-12-09 05:01:47.725294] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:07:06.228 [2024-12-09 05:01:47.725342] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.228 [2024-12-09 05:01:47.824206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:06.228 [2024-12-09 05:01:47.862297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:06.228 [2024-12-09 05:01:47.862329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:06.228 [2024-12-09 05:01:47.862342] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.228 [2024-12-09 05:01:47.862365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.228 [2024-12-09 05:01:47.862373] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:06.228 [2024-12-09 05:01:47.863604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.228 [2024-12-09 05:01:47.863604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.228 [2024-12-09 05:01:48.594645] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.228 [2024-12-09 05:01:48.614848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.228 NULL1 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.228 Delay0 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.228 05:01:48 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=311390 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:06.228 05:01:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:06.487 [2024-12-09 05:01:48.725914] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:07:08.397 05:01:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:08.397 05:01:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.397 05:01:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 starting I/O failed: -6 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 starting I/O failed: -6 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 starting I/O failed: -6 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 starting I/O failed: -6 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 starting I/O failed: -6 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 starting I/O failed: -6 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error 
(sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 starting I/O failed: -6 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 starting I/O failed: -6 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 starting I/O failed: -6 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 starting I/O failed: -6 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 starting I/O failed: -6 00:07:08.397 [2024-12-09 05:01:50.840507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100e900 is same with the state(6) to be set 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Read completed 
with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 
00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 [2024-12-09 05:01:50.840872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d410 is same with the state(6) to be set 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 starting I/O failed: -6 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 starting I/O failed: -6 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 Write completed with error (sct=0, sc=8) 00:07:08.397 starting I/O failed: -6 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.397 Read completed with error (sct=0, sc=8) 00:07:08.398 starting I/O failed: -6 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 starting I/O failed: -6 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 starting I/O failed: -6 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 starting I/O failed: -6 00:07:08.398 Write 
completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 starting I/O failed: -6 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 starting I/O failed: -6 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 starting I/O failed: -6 00:07:08.398 [2024-12-09 05:01:50.845681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f08dc000c40 is same with the state(6) to be set 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Write 
completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Write completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:08.398 Read completed with error (sct=0, sc=8) 00:07:09.781 [2024-12-09 05:01:51.820138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100e720 is same with the state(6) to be set 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Write completed with error (sct=0, sc=8) 00:07:09.781 Write completed with error (sct=0, sc=8) 00:07:09.781 Write completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with 
error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Write completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Write completed with error (sct=0, sc=8) 00:07:09.781 Write completed with error (sct=0, sc=8) 00:07:09.781 Write completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Write completed with error (sct=0, sc=8) 00:07:09.781 [2024-12-09 05:01:51.843652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100eae0 is same with the state(6) to be set 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Write completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Write completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Write completed with error (sct=0, sc=8) 00:07:09.781 Write completed with error (sct=0, sc=8) 00:07:09.781 Write completed with error (sct=0, sc=8) 00:07:09.781 Write completed with error (sct=0, sc=8) 00:07:09.781 Write completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Write completed with error (sct=0, 
sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 [2024-12-09 05:01:51.844102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d740 is same with the state(6) to be set 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Write completed with error (sct=0, sc=8) 00:07:09.781 Write completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 Read completed with error (sct=0, sc=8) 00:07:09.781 [2024-12-09 05:01:51.847196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f08dc00d680 is same with the state(6) to be set 00:07:09.781 Write completed with error (sct=0, sc=8) 00:07:09.782 Write completed with error (sct=0, sc=8) 00:07:09.782 Read completed with error (sct=0, sc=8) 00:07:09.782 Write completed with error (sct=0, sc=8) 00:07:09.782 Read completed with error (sct=0, sc=8) 00:07:09.782 Write completed with error (sct=0, sc=8) 00:07:09.782 Read completed with error (sct=0, sc=8) 00:07:09.782 Read completed with error (sct=0, sc=8) 00:07:09.782 Write completed with error (sct=0, sc=8) 00:07:09.782 Read completed with error (sct=0, sc=8) 00:07:09.782 Read completed with error (sct=0, sc=8) 00:07:09.782 Read completed with error (sct=0, sc=8) 00:07:09.782 Read completed with error (sct=0, sc=8) 00:07:09.782 Read completed with error (sct=0, sc=8) 
00:07:09.782 Read completed with error (sct=0, sc=8) 00:07:09.782 Write completed with error (sct=0, sc=8) 00:07:09.782 [2024-12-09 05:01:51.847803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f08dc00d020 is same with the state(6) to be set 00:07:09.782 Initializing NVMe Controllers 00:07:09.782 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:09.782 Controller IO queue size 128, less than required. 00:07:09.782 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:09.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:09.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:09.782 Initialization complete. Launching workers. 00:07:09.782 ======================================================== 00:07:09.782 Latency(us) 00:07:09.782 Device Information : IOPS MiB/s Average min max 00:07:09.782 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.35 0.08 905735.68 382.30 1005669.29 00:07:09.782 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.89 0.08 930414.10 261.07 1009637.40 00:07:09.782 ======================================================== 00:07:09.782 Total : 319.24 0.16 917709.14 261.07 1009637.40 00:07:09.782 00:07:09.782 [2024-12-09 05:01:51.848430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100e720 (9): Bad file descriptor 00:07:09.782 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.782 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:09.782 05:01:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 311390 00:07:09.782 05:01:51 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:09.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:10.042 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:10.042 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 311390 00:07:10.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (311390) - No such process 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 311390 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 311390 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 311390 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:10.043 05:01:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.043 [2024-12-09 05:01:52.378714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # 
perf_pid=311945 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 311945 00:07:10.043 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:10.043 [2024-12-09 05:01:52.477837] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:10.639 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:10.639 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 311945 00:07:10.639 05:01:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:11.210 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:11.210 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 311945 00:07:11.210 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:11.470 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:11.470 05:01:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 311945 00:07:11.470 05:01:53 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:12.057 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:12.057 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 311945 00:07:12.057 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:12.626 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:12.627 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 311945 00:07:12.627 05:01:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:13.196 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:13.196 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 311945 00:07:13.196 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:13.196 Initializing NVMe Controllers 00:07:13.196 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:13.196 Controller IO queue size 128, less than required. 00:07:13.196 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:13.196 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:13.196 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:13.196 Initialization complete. Launching workers. 
00:07:13.196 ======================================================== 00:07:13.196 Latency(us) 00:07:13.196 Device Information : IOPS MiB/s Average min max 00:07:13.196 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002131.92 1000143.67 1044170.78 00:07:13.196 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003476.00 1000137.90 1009418.10 00:07:13.196 ======================================================== 00:07:13.196 Total : 256.00 0.12 1002803.96 1000137.90 1044170.78 00:07:13.196 00:07:13.764 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:13.764 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 311945 00:07:13.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (311945) - No such process 00:07:13.764 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 311945 00:07:13.764 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:13.764 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:13.764 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:13.764 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:13.764 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:13.764 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:13.764 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:13.764 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:07:13.764 rmmod nvme_tcp 00:07:13.764 rmmod nvme_fabrics 00:07:13.764 rmmod nvme_keyring 00:07:13.764 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:13.764 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:13.764 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:13.764 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 311118 ']' 00:07:13.764 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 311118 00:07:13.764 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 311118 ']' 00:07:13.764 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 311118 00:07:13.764 05:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:13.764 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.764 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 311118 00:07:13.764 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.764 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.764 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 311118' 00:07:13.764 killing process with pid 311118 00:07:13.764 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 311118 00:07:13.764 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 311118 
00:07:14.024 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:14.024 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:14.024 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:14.024 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:14.024 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:14.024 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:14.024 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:14.024 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:14.024 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:14.024 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.024 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:14.024 05:01:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.929 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:15.929 00:07:15.929 real 0m18.241s 00:07:15.929 user 0m30.478s 00:07:15.929 sys 0m7.194s 00:07:15.929 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.929 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:15.929 ************************************ 00:07:15.929 END TEST 
nvmf_delete_subsystem 00:07:15.929 ************************************ 00:07:15.929 05:01:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:15.929 05:01:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:15.929 05:01:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.929 05:01:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:16.188 ************************************ 00:07:16.188 START TEST nvmf_host_management 00:07:16.188 ************************************ 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:16.188 * Looking for test storage... 00:07:16.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.188 05:01:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:16.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.188 --rc genhtml_branch_coverage=1 00:07:16.188 --rc genhtml_function_coverage=1 00:07:16.188 --rc genhtml_legend=1 00:07:16.188 --rc 
geninfo_all_blocks=1 00:07:16.188 --rc geninfo_unexecuted_blocks=1 00:07:16.188 00:07:16.188 ' 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:16.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.188 --rc genhtml_branch_coverage=1 00:07:16.188 --rc genhtml_function_coverage=1 00:07:16.188 --rc genhtml_legend=1 00:07:16.188 --rc geninfo_all_blocks=1 00:07:16.188 --rc geninfo_unexecuted_blocks=1 00:07:16.188 00:07:16.188 ' 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:16.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.188 --rc genhtml_branch_coverage=1 00:07:16.188 --rc genhtml_function_coverage=1 00:07:16.188 --rc genhtml_legend=1 00:07:16.188 --rc geninfo_all_blocks=1 00:07:16.188 --rc geninfo_unexecuted_blocks=1 00:07:16.188 00:07:16.188 ' 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:16.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.188 --rc genhtml_branch_coverage=1 00:07:16.188 --rc genhtml_function_coverage=1 00:07:16.188 --rc genhtml_legend=1 00:07:16.188 --rc geninfo_all_blocks=1 00:07:16.188 --rc geninfo_unexecuted_blocks=1 00:07:16.188 00:07:16.188 ' 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.188 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:16.448 
05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:16.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:16.448 05:01:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:24.581 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:24.581 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:24.581 05:02:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:24.581 Found net devices under 0000:af:00.0: cvl_0_0 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:24.581 Found net devices under 0000:af:00.1: cvl_0_1 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:24.581 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:24.582 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:24.582 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:24.582 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:24.582 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:24.582 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:24.582 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:24.582 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:24.582 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:24.582 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:24.582 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:24.582 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:24.582 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:24.582 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:07:24.582 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:24.582 05:02:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:24.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:24.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:07:24.582 00:07:24.582 --- 10.0.0.2 ping statistics --- 00:07:24.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.582 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:24.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:24.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:07:24.582 00:07:24.582 --- 10.0.0.1 ping statistics --- 00:07:24.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.582 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:24.582 05:02:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=316453 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 316453 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 316453 ']' 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.582 [2024-12-09 05:02:06.149027] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:07:24.582 [2024-12-09 05:02:06.149081] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.582 [2024-12-09 05:02:06.246171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.582 [2024-12-09 05:02:06.289040] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.582 [2024-12-09 05:02:06.289078] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:24.582 [2024-12-09 05:02:06.289089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:24.582 [2024-12-09 05:02:06.289097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:24.582 [2024-12-09 05:02:06.289104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:24.582 [2024-12-09 05:02:06.290919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.582 [2024-12-09 05:02:06.291027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.582 [2024-12-09 05:02:06.291136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.582 [2024-12-09 05:02:06.291137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:24.582 05:02:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.582 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:24.582 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:24.582 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.582 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.582 [2024-12-09 05:02:07.031512] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.582 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.582 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:24.582 05:02:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:24.582 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.582 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.842 Malloc0 00:07:24.842 [2024-12-09 05:02:07.116023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=316640 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 316640 /var/tmp/bdevperf.sock 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 316640 ']' 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:24.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:24.842 { 00:07:24.842 "params": { 00:07:24.842 "name": "Nvme$subsystem", 00:07:24.842 "trtype": "$TEST_TRANSPORT", 00:07:24.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:24.842 "adrfam": "ipv4", 00:07:24.842 "trsvcid": "$NVMF_PORT", 00:07:24.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:24.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:24.842 "hdgst": ${hdgst:-false}, 
00:07:24.842 "ddgst": ${ddgst:-false} 00:07:24.842 }, 00:07:24.842 "method": "bdev_nvme_attach_controller" 00:07:24.842 } 00:07:24.842 EOF 00:07:24.842 )") 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:24.842 05:02:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:24.842 "params": { 00:07:24.842 "name": "Nvme0", 00:07:24.842 "trtype": "tcp", 00:07:24.842 "traddr": "10.0.0.2", 00:07:24.842 "adrfam": "ipv4", 00:07:24.842 "trsvcid": "4420", 00:07:24.842 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:24.842 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:24.842 "hdgst": false, 00:07:24.842 "ddgst": false 00:07:24.842 }, 00:07:24.842 "method": "bdev_nvme_attach_controller" 00:07:24.842 }' 00:07:24.842 [2024-12-09 05:02:07.225265] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:07:24.842 [2024-12-09 05:02:07.225317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316640 ] 00:07:25.101 [2024-12-09 05:02:07.319333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.101 [2024-12-09 05:02:07.359759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.101 Running I/O for 10 seconds... 
00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1166 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1166 -ge 100 ']' 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.669 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.930 [2024-12-09 05:02:08.139272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae1b0 is same with the state(6) to be set 00:07:25.930 [2024-12-09 05:02:08.139321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae1b0 is same with the state(6) to be set 00:07:25.930 [2024-12-09 05:02:08.139331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfae1b0 is same 
with the state(6) to be set 00:07:25.930 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.930 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:25.930 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.930 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.930 [2024-12-09 05:02:08.145411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.930 [2024-12-09 05:02:08.145445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.930 [2024-12-09 05:02:08.145457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.930 [2024-12-09 05:02:08.145468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.930 [2024-12-09 05:02:08.145478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.930 [2024-12-09 05:02:08.145488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.930 [2024-12-09 05:02:08.145498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:25.930 [2024-12-09 05:02:08.145511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.930 [2024-12-09 
05:02:08.145521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97cad0 is same with the state(6) to be set 00:07:25.930 [2024-12-09 05:02:08.145574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.930 [2024-12-09 05:02:08.145586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.145601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.145611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.145622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.145631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.145642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.145651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.145661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.145670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.145681] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.145690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.145701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.145710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.145720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.145729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.145740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.145749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.145760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.145769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.145779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.145788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.145799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.145814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.145824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.145841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.145852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.145861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.145872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.145881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.145891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.145900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.145911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 
[2024-12-09 05:02:08.145920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.145930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.145939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.145950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.145960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.145970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.145980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.145990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.145999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.146009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.146018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.146028] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.146038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.146048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.146057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.146069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.146077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.146088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.146097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.146108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.146116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.146127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.146135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.146146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.146157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.146167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.146176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.146186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.931 [2024-12-09 05:02:08.146195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.931 [2024-12-09 05:02:08.146206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 
[2024-12-09 05:02:08.146368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.932 [2024-12-09 05:02:08.146774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.932 [2024-12-09 05:02:08.146784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.933 [2024-12-09 05:02:08.146794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.933 [2024-12-09 
05:02:08.146805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.933 [2024-12-09 05:02:08.146815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.933 [2024-12-09 05:02:08.146824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.933 [2024-12-09 05:02:08.146834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:25.933 [2024-12-09 05:02:08.146843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:25.933 [2024-12-09 05:02:08.147750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:25.933 task offset: 32768 on job bdev=Nvme0n1 fails 00:07:25.933 00:07:25.933 Latency(us) 00:07:25.933 [2024-12-09T04:02:08.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.933 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:25.933 Job: Nvme0n1 ended in about 0.62 seconds with error 00:07:25.933 Verification LBA range: start 0x0 length 0x400 00:07:25.933 Nvme0n1 : 0.62 2054.44 128.40 102.72 0.00 29084.70 1952.97 26109.54 00:07:25.933 [2024-12-09T04:02:08.403Z] =================================================================================================================== 00:07:25.933 [2024-12-09T04:02:08.403Z] Total : 2054.44 128.40 102.72 0.00 29084.70 1952.97 26109.54 00:07:25.933 [2024-12-09 05:02:08.150037] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.933 [2024-12-09 05:02:08.150058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97cad0 
(9): Bad file descriptor 00:07:25.933 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.933 05:02:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:25.933 [2024-12-09 05:02:08.202381] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:26.873 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 316640 00:07:26.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (316640) - No such process 00:07:26.873 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:26.873 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:26.873 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:26.873 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:26.873 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:26.873 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:26.873 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:26.873 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:26.873 { 00:07:26.873 "params": { 00:07:26.873 "name": "Nvme$subsystem", 00:07:26.873 "trtype": "$TEST_TRANSPORT", 00:07:26.873 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:07:26.873 "adrfam": "ipv4", 00:07:26.873 "trsvcid": "$NVMF_PORT", 00:07:26.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:26.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:26.873 "hdgst": ${hdgst:-false}, 00:07:26.873 "ddgst": ${ddgst:-false} 00:07:26.873 }, 00:07:26.873 "method": "bdev_nvme_attach_controller" 00:07:26.873 } 00:07:26.873 EOF 00:07:26.873 )") 00:07:26.873 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:26.873 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:26.873 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:26.874 05:02:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:26.874 "params": { 00:07:26.874 "name": "Nvme0", 00:07:26.874 "trtype": "tcp", 00:07:26.874 "traddr": "10.0.0.2", 00:07:26.874 "adrfam": "ipv4", 00:07:26.874 "trsvcid": "4420", 00:07:26.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:26.874 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:26.874 "hdgst": false, 00:07:26.874 "ddgst": false 00:07:26.874 }, 00:07:26.874 "method": "bdev_nvme_attach_controller" 00:07:26.874 }' 00:07:26.874 [2024-12-09 05:02:09.212043] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:07:26.874 [2024-12-09 05:02:09.212094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317044 ] 00:07:26.874 [2024-12-09 05:02:09.306864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.134 [2024-12-09 05:02:09.344307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.134 Running I/O for 1 seconds... 
00:07:28.515 2011.00 IOPS, 125.69 MiB/s 00:07:28.515 Latency(us) 00:07:28.515 [2024-12-09T04:02:10.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.515 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:28.515 Verification LBA range: start 0x0 length 0x400 00:07:28.515 Nvme0n1 : 1.01 2044.73 127.80 0.00 0.00 30581.25 4089.45 26004.68 00:07:28.515 [2024-12-09T04:02:10.985Z] =================================================================================================================== 00:07:28.515 [2024-12-09T04:02:10.985Z] Total : 2044.73 127.80 0.00 0.00 30581.25 4089.45 26004.68 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:28.515 05:02:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:28.515 rmmod nvme_tcp 00:07:28.515 rmmod nvme_fabrics 00:07:28.515 rmmod nvme_keyring 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 316453 ']' 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 316453 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 316453 ']' 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 316453 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 316453 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 316453' 00:07:28.515 killing process with pid 316453 00:07:28.515 05:02:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 316453 00:07:28.515 05:02:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 316453 00:07:28.775 [2024-12-09 05:02:11.101459] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:28.775 05:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:28.775 05:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:28.775 05:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:28.775 05:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:28.775 05:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:28.775 05:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:28.775 05:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:28.775 05:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:28.775 05:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:28.775 05:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.775 05:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.775 05:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:31.318 00:07:31.318 real 0m14.779s 00:07:31.318 user 0m23.755s 
00:07:31.318 sys 0m7.135s 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.318 ************************************ 00:07:31.318 END TEST nvmf_host_management 00:07:31.318 ************************************ 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:31.318 ************************************ 00:07:31.318 START TEST nvmf_lvol 00:07:31.318 ************************************ 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:31.318 * Looking for test storage... 
00:07:31.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.318 05:02:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:31.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.318 --rc genhtml_branch_coverage=1 00:07:31.318 --rc genhtml_function_coverage=1 00:07:31.318 --rc genhtml_legend=1 00:07:31.318 --rc geninfo_all_blocks=1 00:07:31.318 --rc geninfo_unexecuted_blocks=1 
00:07:31.318 00:07:31.318 ' 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:31.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.318 --rc genhtml_branch_coverage=1 00:07:31.318 --rc genhtml_function_coverage=1 00:07:31.318 --rc genhtml_legend=1 00:07:31.318 --rc geninfo_all_blocks=1 00:07:31.318 --rc geninfo_unexecuted_blocks=1 00:07:31.318 00:07:31.318 ' 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:31.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.318 --rc genhtml_branch_coverage=1 00:07:31.318 --rc genhtml_function_coverage=1 00:07:31.318 --rc genhtml_legend=1 00:07:31.318 --rc geninfo_all_blocks=1 00:07:31.318 --rc geninfo_unexecuted_blocks=1 00:07:31.318 00:07:31.318 ' 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:31.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.318 --rc genhtml_branch_coverage=1 00:07:31.318 --rc genhtml_function_coverage=1 00:07:31.318 --rc genhtml_legend=1 00:07:31.318 --rc geninfo_all_blocks=1 00:07:31.318 --rc geninfo_unexecuted_blocks=1 00:07:31.318 00:07:31.318 ' 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.318 05:02:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.318 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:31.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:31.319 05:02:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:39.508 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:39.508 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:39.508 
05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:39.508 Found net devices under 0000:af:00.0: cvl_0_0 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:39.508 05:02:20 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:39.508 Found net devices under 0000:af:00.1: cvl_0_1 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.508 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:39.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:39.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:07:39.509 00:07:39.509 --- 10.0.0.2 ping statistics --- 00:07:39.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.509 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:39.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:07:39.509 00:07:39.509 --- 10.0.0.1 ping statistics --- 00:07:39.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.509 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=321033 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 321033 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 321033 ']' 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.509 05:02:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:39.509 [2024-12-09 05:02:20.916064] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:07:39.509 [2024-12-09 05:02:20.916116] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.509 [2024-12-09 05:02:21.013749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:39.509 [2024-12-09 05:02:21.054735] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.509 [2024-12-09 05:02:21.054774] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.509 [2024-12-09 05:02:21.054784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.509 [2024-12-09 05:02:21.054793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.509 [2024-12-09 05:02:21.054800] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:39.509 [2024-12-09 05:02:21.056364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.509 [2024-12-09 05:02:21.056473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.509 [2024-12-09 05:02:21.056474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.509 05:02:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.509 05:02:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:39.509 05:02:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:39.509 05:02:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:39.509 05:02:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:39.509 05:02:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.509 05:02:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:39.509 [2024-12-09 05:02:21.973123] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.769 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:39.769 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:39.769 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:40.028 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:40.028 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:40.287 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:40.546 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a2b07293-4902-4c13-ad07-c9e25fa80c73 00:07:40.546 05:02:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a2b07293-4902-4c13-ad07-c9e25fa80c73 lvol 20 00:07:40.805 05:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=6e5e9c7e-d255-47b8-b76c-e81dc8d71a9d 00:07:40.805 05:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:40.805 05:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6e5e9c7e-d255-47b8-b76c-e81dc8d71a9d 00:07:41.063 05:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:41.323 [2024-12-09 05:02:23.621371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.323 05:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:41.582 05:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=321601 00:07:41.582 05:02:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:41.582 05:02:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:42.547 05:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 6e5e9c7e-d255-47b8-b76c-e81dc8d71a9d MY_SNAPSHOT 00:07:42.805 05:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d24bffe9-7517-4ddf-b3e9-13e37ec6ff7f 00:07:42.805 05:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 6e5e9c7e-d255-47b8-b76c-e81dc8d71a9d 30 00:07:43.065 05:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d24bffe9-7517-4ddf-b3e9-13e37ec6ff7f MY_CLONE 00:07:43.324 05:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3ca14aef-2d8c-456f-8fcd-ec2bef8ab231 00:07:43.324 05:02:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3ca14aef-2d8c-456f-8fcd-ec2bef8ab231 00:07:43.893 05:02:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 321601 00:07:52.035 Initializing NVMe Controllers 00:07:52.035 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:52.035 Controller IO queue size 128, less than required. 00:07:52.035 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
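The RPC sequence that nvmf_lvol.sh issues in the log above (transport, malloc bdevs, raid0, lvstore, lvol, subsystem, then the snapshot/resize/clone/inflate cycle while spdk_nvme_perf runs) can be sketched as a dry-run. `rpc()` echoes instead of calling the real `scripts/rpc.py`, and the quoted `<...-uuid>` strings are placeholders standing in for the UUIDs the real RPCs print (e.g. the lvstore and lvol UUIDs visible in the log).

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_lvol.sh RPC sequence recorded above.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB IO unit
rpc bdev_malloc_create 64 512                        # -> Malloc0 (64 MiB, 512 B blocks)
rpc bdev_malloc_create 64 512                        # -> Malloc1
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"
rpc bdev_lvol_create_lvstore raid0 lvs               # prints the lvstore UUID
rpc bdev_lvol_create -u '<lvstore-uuid>' lvol 20     # 20 GiB lvol, prints its UUID
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 '<lvol-uuid>'
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# while spdk_nvme_perf writes to the subsystem, exercise the lvol lifecycle:
rpc bdev_lvol_snapshot '<lvol-uuid>' MY_SNAPSHOT     # prints the snapshot UUID
rpc bdev_lvol_resize '<lvol-uuid>' 30                # grow the live lvol to 30 GiB
rpc bdev_lvol_clone '<snapshot-uuid>' MY_CLONE       # prints the clone UUID
rpc bdev_lvol_inflate '<clone-uuid>'                 # detach the clone from its snapshot
```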
00:07:52.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:52.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:52.035 Initialization complete. Launching workers. 00:07:52.035 ======================================================== 00:07:52.035 Latency(us) 00:07:52.035 Device Information : IOPS MiB/s Average min max 00:07:52.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12628.30 49.33 10136.06 1497.18 63392.78 00:07:52.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12459.60 48.67 10273.52 3530.98 47557.48 00:07:52.035 ======================================================== 00:07:52.035 Total : 25087.90 98.00 10204.33 1497.18 63392.78 00:07:52.035 00:07:52.035 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:52.035 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6e5e9c7e-d255-47b8-b76c-e81dc8d71a9d 00:07:52.295 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a2b07293-4902-4c13-ad07-c9e25fa80c73 00:07:52.555 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:52.555 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:52.555 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:52.555 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:52.555 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:52.556 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:52.556 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:52.556 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:52.556 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:52.556 rmmod nvme_tcp 00:07:52.556 rmmod nvme_fabrics 00:07:52.556 rmmod nvme_keyring 00:07:52.556 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:52.556 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:52.556 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:52.556 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 321033 ']' 00:07:52.556 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 321033 00:07:52.556 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 321033 ']' 00:07:52.556 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 321033 00:07:52.556 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:52.556 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.556 05:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 321033 00:07:52.816 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.816 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.816 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 321033' 00:07:52.816 killing process with pid 321033 00:07:52.816 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@973 -- # kill 321033 00:07:52.816 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 321033 00:07:52.816 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:52.816 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:52.816 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:52.816 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:52.816 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:52.816 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:52.816 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:53.076 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:53.076 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:53.076 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.076 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.076 05:02:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.989 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:54.989 00:07:54.989 real 0m24.059s 00:07:54.989 user 1m4.388s 00:07:54.989 sys 0m10.175s 00:07:54.989 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.989 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:54.989 ************************************ 00:07:54.989 END TEST nvmf_lvol 00:07:54.989 
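The `nvmftestfini` teardown logged above tears the stack down top-to-bottom: subsystem, then lvol, then lvstore, then the target process, modules, iptables rules, and finally the namespace. A dry-run sketch of that order, with `run()` echoing instead of executing; the pid 321033 and the `<...-uuid>` placeholders mirror the values in this log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmftestfini teardown order recorded above.
run() { echo "$@"; }

run rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
run rpc.py bdev_lvol_delete '<lvol-uuid>'
run rpc.py bdev_lvol_delete_lvstore -u '<lvstore-uuid>'
run kill 321033                                      # nvmfpid from this run
run modprobe -v -r nvme-tcp
run modprobe -v -r nvme-fabrics
# restore iptables minus the SPDK_NVMF-tagged rules, then drop the namespace
run sh -c 'iptables-save | grep -v SPDK_NVMF | iptables-restore'
run ip netns delete cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_1
```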
************************************ 00:07:54.989 05:02:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:54.989 05:02:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:54.989 05:02:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.989 05:02:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:54.989 ************************************ 00:07:54.989 START TEST nvmf_lvs_grow 00:07:54.989 ************************************ 00:07:54.989 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:55.251 * Looking for test storage... 00:07:55.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.251 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:55.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.252 --rc genhtml_branch_coverage=1 00:07:55.252 --rc genhtml_function_coverage=1 00:07:55.252 --rc genhtml_legend=1 00:07:55.252 --rc geninfo_all_blocks=1 00:07:55.252 --rc geninfo_unexecuted_blocks=1 00:07:55.252 00:07:55.252 ' 
00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:55.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.252 --rc genhtml_branch_coverage=1 00:07:55.252 --rc genhtml_function_coverage=1 00:07:55.252 --rc genhtml_legend=1 00:07:55.252 --rc geninfo_all_blocks=1 00:07:55.252 --rc geninfo_unexecuted_blocks=1 00:07:55.252 00:07:55.252 ' 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:55.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.252 --rc genhtml_branch_coverage=1 00:07:55.252 --rc genhtml_function_coverage=1 00:07:55.252 --rc genhtml_legend=1 00:07:55.252 --rc geninfo_all_blocks=1 00:07:55.252 --rc geninfo_unexecuted_blocks=1 00:07:55.252 00:07:55.252 ' 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:55.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.252 --rc genhtml_branch_coverage=1 00:07:55.252 --rc genhtml_function_coverage=1 00:07:55.252 --rc genhtml_legend=1 00:07:55.252 --rc geninfo_all_blocks=1 00:07:55.252 --rc geninfo_unexecuted_blocks=1 00:07:55.252 00:07:55.252 ' 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.252 05:02:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.252 
05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.252 05:02:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:55.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:55.252 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:55.253 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:55.253 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:55.253 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:55.253 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:55.253 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:55.253 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:55.253 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.253 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:55.253 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:55.253 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:55.253 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.253 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.253 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.253 
05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:55.253 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:55.253 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:55.253 05:02:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:03.387 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:03.387 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.387 
05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.387 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:03.388 Found net devices under 0000:af:00.0: cvl_0_0 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:03.388 Found net devices under 0000:af:00.1: cvl_0_1 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.388 05:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:03.388 05:02:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:03.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:08:03.388 00:08:03.388 --- 10.0.0.2 ping statistics --- 00:08:03.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.388 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:03.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:08:03.388 00:08:03.388 --- 10.0.0.1 ping statistics --- 00:08:03.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.388 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=327438 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 327438 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 327438 ']' 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.388 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.388 [2024-12-09 05:02:45.112576] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:08:03.388 [2024-12-09 05:02:45.112629] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.388 [2024-12-09 05:02:45.208365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.388 [2024-12-09 05:02:45.247887] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.388 [2024-12-09 05:02:45.247926] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.388 [2024-12-09 05:02:45.247935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.388 [2024-12-09 05:02:45.247943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.388 [2024-12-09 05:02:45.247950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:03.388 [2024-12-09 05:02:45.248533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.648 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.648 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:03.648 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:03.648 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:03.648 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.648 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.648 05:02:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:03.909 [2024-12-09 05:02:46.155194] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.909 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:03.909 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.909 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.909 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.909 ************************************ 00:08:03.909 START TEST lvs_grow_clean 00:08:03.909 ************************************ 00:08:03.909 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:03.909 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:03.909 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:03.909 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:03.909 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:03.909 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:03.909 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:03.909 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:03.909 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:03.909 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:04.169 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:04.169 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:04.169 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=eabe03e5-616c-4003-a251-4867eed69f8a 00:08:04.169 05:02:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eabe03e5-616c-4003-a251-4867eed69f8a 00:08:04.169 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:04.430 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:04.430 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:04.430 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u eabe03e5-616c-4003-a251-4867eed69f8a lvol 150 00:08:04.690 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b0d6dcfb-e510-4243-80c5-16fcb27bba05 00:08:04.690 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:04.690 05:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:04.950 [2024-12-09 05:02:47.167048] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:04.950 [2024-12-09 05:02:47.167098] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:04.950 true 00:08:04.950 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eabe03e5-616c-4003-a251-4867eed69f8a 00:08:04.950 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:04.950 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:04.950 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:05.210 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b0d6dcfb-e510-4243-80c5-16fcb27bba05 00:08:05.471 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:05.471 [2024-12-09 05:02:47.897356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.471 05:02:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.731 05:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=328020 00:08:05.731 05:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:05.731 05:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:05.731 05:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 328020 /var/tmp/bdevperf.sock 00:08:05.731 05:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 328020 ']' 00:08:05.731 05:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:05.731 05:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.731 05:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:05.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:05.731 05:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.731 05:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:05.731 [2024-12-09 05:02:48.142802] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:08:05.731 [2024-12-09 05:02:48.142853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid328020 ] 00:08:05.991 [2024-12-09 05:02:48.235630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.991 [2024-12-09 05:02:48.278589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.561 05:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.561 05:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:06.561 05:02:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:06.821 Nvme0n1 00:08:06.821 05:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:07.081 [ 00:08:07.081 { 00:08:07.081 "name": "Nvme0n1", 00:08:07.081 "aliases": [ 00:08:07.081 "b0d6dcfb-e510-4243-80c5-16fcb27bba05" 00:08:07.081 ], 00:08:07.081 "product_name": "NVMe disk", 00:08:07.081 "block_size": 4096, 00:08:07.081 "num_blocks": 38912, 00:08:07.081 "uuid": "b0d6dcfb-e510-4243-80c5-16fcb27bba05", 00:08:07.081 "numa_id": 1, 00:08:07.081 "assigned_rate_limits": { 00:08:07.081 "rw_ios_per_sec": 0, 00:08:07.081 "rw_mbytes_per_sec": 0, 00:08:07.081 "r_mbytes_per_sec": 0, 00:08:07.081 "w_mbytes_per_sec": 0 00:08:07.081 }, 00:08:07.081 "claimed": false, 00:08:07.081 "zoned": false, 00:08:07.081 "supported_io_types": { 00:08:07.081 "read": true, 
00:08:07.081 "write": true, 00:08:07.081 "unmap": true, 00:08:07.081 "flush": true, 00:08:07.081 "reset": true, 00:08:07.081 "nvme_admin": true, 00:08:07.081 "nvme_io": true, 00:08:07.081 "nvme_io_md": false, 00:08:07.081 "write_zeroes": true, 00:08:07.081 "zcopy": false, 00:08:07.081 "get_zone_info": false, 00:08:07.081 "zone_management": false, 00:08:07.081 "zone_append": false, 00:08:07.081 "compare": true, 00:08:07.081 "compare_and_write": true, 00:08:07.081 "abort": true, 00:08:07.081 "seek_hole": false, 00:08:07.081 "seek_data": false, 00:08:07.081 "copy": true, 00:08:07.081 "nvme_iov_md": false 00:08:07.081 }, 00:08:07.081 "memory_domains": [ 00:08:07.081 { 00:08:07.081 "dma_device_id": "system", 00:08:07.081 "dma_device_type": 1 00:08:07.081 } 00:08:07.081 ], 00:08:07.081 "driver_specific": { 00:08:07.081 "nvme": [ 00:08:07.081 { 00:08:07.081 "trid": { 00:08:07.081 "trtype": "TCP", 00:08:07.082 "adrfam": "IPv4", 00:08:07.082 "traddr": "10.0.0.2", 00:08:07.082 "trsvcid": "4420", 00:08:07.082 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:07.082 }, 00:08:07.082 "ctrlr_data": { 00:08:07.082 "cntlid": 1, 00:08:07.082 "vendor_id": "0x8086", 00:08:07.082 "model_number": "SPDK bdev Controller", 00:08:07.082 "serial_number": "SPDK0", 00:08:07.082 "firmware_revision": "25.01", 00:08:07.082 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:07.082 "oacs": { 00:08:07.082 "security": 0, 00:08:07.082 "format": 0, 00:08:07.082 "firmware": 0, 00:08:07.082 "ns_manage": 0 00:08:07.082 }, 00:08:07.082 "multi_ctrlr": true, 00:08:07.082 "ana_reporting": false 00:08:07.082 }, 00:08:07.082 "vs": { 00:08:07.082 "nvme_version": "1.3" 00:08:07.082 }, 00:08:07.082 "ns_data": { 00:08:07.082 "id": 1, 00:08:07.082 "can_share": true 00:08:07.082 } 00:08:07.082 } 00:08:07.082 ], 00:08:07.082 "mp_policy": "active_passive" 00:08:07.082 } 00:08:07.082 } 00:08:07.082 ] 00:08:07.082 05:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=328243 
00:08:07.082 05:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:07.082 05:02:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:07.082 Running I/O for 10 seconds... 00:08:08.473 Latency(us) 00:08:08.473 [2024-12-09T04:02:50.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.473 Nvme0n1 : 1.00 23331.00 91.14 0.00 0.00 0.00 0.00 0.00 00:08:08.473 [2024-12-09T04:02:50.943Z] =================================================================================================================== 00:08:08.473 [2024-12-09T04:02:50.943Z] Total : 23331.00 91.14 0.00 0.00 0.00 0.00 0.00 00:08:08.473 00:08:09.042 05:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u eabe03e5-616c-4003-a251-4867eed69f8a 00:08:09.303 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.303 Nvme0n1 : 2.00 23582.00 92.12 0.00 0.00 0.00 0.00 0.00 00:08:09.303 [2024-12-09T04:02:51.773Z] =================================================================================================================== 00:08:09.303 [2024-12-09T04:02:51.773Z] Total : 23582.00 92.12 0.00 0.00 0.00 0.00 0.00 00:08:09.303 00:08:09.303 true 00:08:09.303 05:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eabe03e5-616c-4003-a251-4867eed69f8a 00:08:09.303 05:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:08:09.563 05:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:09.563 05:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:09.563 05:02:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 328243 00:08:10.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.133 Nvme0n1 : 3.00 23702.67 92.59 0.00 0.00 0.00 0.00 0.00 00:08:10.133 [2024-12-09T04:02:52.603Z] =================================================================================================================== 00:08:10.133 [2024-12-09T04:02:52.603Z] Total : 23702.67 92.59 0.00 0.00 0.00 0.00 0.00 00:08:10.133 00:08:11.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.071 Nvme0n1 : 4.00 23801.00 92.97 0.00 0.00 0.00 0.00 0.00 00:08:11.071 [2024-12-09T04:02:53.541Z] =================================================================================================================== 00:08:11.071 [2024-12-09T04:02:53.541Z] Total : 23801.00 92.97 0.00 0.00 0.00 0.00 0.00 00:08:11.071 00:08:12.452 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.452 Nvme0n1 : 5.00 23862.40 93.21 0.00 0.00 0.00 0.00 0.00 00:08:12.452 [2024-12-09T04:02:54.922Z] =================================================================================================================== 00:08:12.452 [2024-12-09T04:02:54.922Z] Total : 23862.40 93.21 0.00 0.00 0.00 0.00 0.00 00:08:12.452 00:08:13.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.390 Nvme0n1 : 6.00 23920.50 93.44 0.00 0.00 0.00 0.00 0.00 00:08:13.390 [2024-12-09T04:02:55.860Z] =================================================================================================================== 00:08:13.390 
[2024-12-09T04:02:55.860Z] Total : 23920.50 93.44 0.00 0.00 0.00 0.00 0.00 00:08:13.390 00:08:14.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.330 Nvme0n1 : 7.00 23954.71 93.57 0.00 0.00 0.00 0.00 0.00 00:08:14.330 [2024-12-09T04:02:56.800Z] =================================================================================================================== 00:08:14.330 [2024-12-09T04:02:56.800Z] Total : 23954.71 93.57 0.00 0.00 0.00 0.00 0.00 00:08:14.330 00:08:15.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.270 Nvme0n1 : 8.00 23983.25 93.68 0.00 0.00 0.00 0.00 0.00 00:08:15.270 [2024-12-09T04:02:57.740Z] =================================================================================================================== 00:08:15.270 [2024-12-09T04:02:57.740Z] Total : 23983.25 93.68 0.00 0.00 0.00 0.00 0.00 00:08:15.270 00:08:16.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.208 Nvme0n1 : 9.00 24014.33 93.81 0.00 0.00 0.00 0.00 0.00 00:08:16.208 [2024-12-09T04:02:58.678Z] =================================================================================================================== 00:08:16.208 [2024-12-09T04:02:58.678Z] Total : 24014.33 93.81 0.00 0.00 0.00 0.00 0.00 00:08:16.208 00:08:17.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.147 Nvme0n1 : 10.00 24035.90 93.89 0.00 0.00 0.00 0.00 0.00 00:08:17.147 [2024-12-09T04:02:59.617Z] =================================================================================================================== 00:08:17.147 [2024-12-09T04:02:59.617Z] Total : 24035.90 93.89 0.00 0.00 0.00 0.00 0.00 00:08:17.147 00:08:17.147 00:08:17.147 Latency(us) 00:08:17.147 [2024-12-09T04:02:59.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:17.147 Nvme0n1 : 10.00 24034.03 93.88 0.00 0.00 5322.35 1625.29 10223.62 00:08:17.147 [2024-12-09T04:02:59.617Z] =================================================================================================================== 00:08:17.147 [2024-12-09T04:02:59.617Z] Total : 24034.03 93.88 0.00 0.00 5322.35 1625.29 10223.62 00:08:17.147 { 00:08:17.147 "results": [ 00:08:17.147 { 00:08:17.147 "job": "Nvme0n1", 00:08:17.147 "core_mask": "0x2", 00:08:17.147 "workload": "randwrite", 00:08:17.147 "status": "finished", 00:08:17.147 "queue_depth": 128, 00:08:17.147 "io_size": 4096, 00:08:17.147 "runtime": 10.003399, 00:08:17.147 "iops": 24034.03083291989, 00:08:17.147 "mibps": 93.88293294109332, 00:08:17.147 "io_failed": 0, 00:08:17.147 "io_timeout": 0, 00:08:17.147 "avg_latency_us": 5322.349017058339, 00:08:17.147 "min_latency_us": 1625.2928, 00:08:17.147 "max_latency_us": 10223.616 00:08:17.147 } 00:08:17.147 ], 00:08:17.147 "core_count": 1 00:08:17.147 } 00:08:17.147 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 328020 00:08:17.147 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 328020 ']' 00:08:17.147 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 328020 00:08:17.147 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:17.147 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.147 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 328020 00:08:17.406 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:17.406 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:17.406 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 328020' 00:08:17.406 killing process with pid 328020 00:08:17.406 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 328020 00:08:17.406 Received shutdown signal, test time was about 10.000000 seconds 00:08:17.406 00:08:17.406 Latency(us) 00:08:17.406 [2024-12-09T04:02:59.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.406 [2024-12-09T04:02:59.876Z] =================================================================================================================== 00:08:17.406 [2024-12-09T04:02:59.876Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:17.406 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 328020 00:08:17.406 05:02:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:17.665 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:17.924 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eabe03e5-616c-4003-a251-4867eed69f8a 00:08:17.924 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:18.183 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:18.183 05:03:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:18.183 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:18.183 [2024-12-09 05:03:00.644613] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:18.443 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eabe03e5-616c-4003-a251-4867eed69f8a 00:08:18.443 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:18.443 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eabe03e5-616c-4003-a251-4867eed69f8a 00:08:18.443 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.443 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:18.443 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.443 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:18.443 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.443 05:03:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:18.443 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.443 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:18.443 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eabe03e5-616c-4003-a251-4867eed69f8a 00:08:18.443 request: 00:08:18.443 { 00:08:18.443 "uuid": "eabe03e5-616c-4003-a251-4867eed69f8a", 00:08:18.443 "method": "bdev_lvol_get_lvstores", 00:08:18.443 "req_id": 1 00:08:18.443 } 00:08:18.443 Got JSON-RPC error response 00:08:18.443 response: 00:08:18.443 { 00:08:18.443 "code": -19, 00:08:18.443 "message": "No such device" 00:08:18.443 } 00:08:18.443 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:18.443 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:18.443 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:18.443 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:18.443 05:03:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:18.703 aio_bdev 00:08:18.703 05:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev b0d6dcfb-e510-4243-80c5-16fcb27bba05 00:08:18.703 05:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=b0d6dcfb-e510-4243-80c5-16fcb27bba05 00:08:18.703 05:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:18.703 05:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:18.703 05:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:18.703 05:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:18.703 05:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:18.962 05:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b0d6dcfb-e510-4243-80c5-16fcb27bba05 -t 2000 00:08:18.962 [ 00:08:18.962 { 00:08:18.962 "name": "b0d6dcfb-e510-4243-80c5-16fcb27bba05", 00:08:18.962 "aliases": [ 00:08:18.962 "lvs/lvol" 00:08:18.962 ], 00:08:18.962 "product_name": "Logical Volume", 00:08:18.962 "block_size": 4096, 00:08:18.962 "num_blocks": 38912, 00:08:18.962 "uuid": "b0d6dcfb-e510-4243-80c5-16fcb27bba05", 00:08:18.962 "assigned_rate_limits": { 00:08:18.962 "rw_ios_per_sec": 0, 00:08:18.962 "rw_mbytes_per_sec": 0, 00:08:18.962 "r_mbytes_per_sec": 0, 00:08:18.962 "w_mbytes_per_sec": 0 00:08:18.962 }, 00:08:18.962 "claimed": false, 00:08:18.962 "zoned": false, 00:08:18.962 "supported_io_types": { 00:08:18.962 "read": true, 00:08:18.962 "write": true, 00:08:18.962 "unmap": true, 00:08:18.962 "flush": false, 00:08:18.962 "reset": true, 00:08:18.962 
"nvme_admin": false, 00:08:18.962 "nvme_io": false, 00:08:18.962 "nvme_io_md": false, 00:08:18.962 "write_zeroes": true, 00:08:18.962 "zcopy": false, 00:08:18.962 "get_zone_info": false, 00:08:18.962 "zone_management": false, 00:08:18.962 "zone_append": false, 00:08:18.962 "compare": false, 00:08:18.962 "compare_and_write": false, 00:08:18.962 "abort": false, 00:08:18.962 "seek_hole": true, 00:08:18.962 "seek_data": true, 00:08:18.962 "copy": false, 00:08:18.963 "nvme_iov_md": false 00:08:18.963 }, 00:08:18.963 "driver_specific": { 00:08:18.963 "lvol": { 00:08:18.963 "lvol_store_uuid": "eabe03e5-616c-4003-a251-4867eed69f8a", 00:08:18.963 "base_bdev": "aio_bdev", 00:08:18.963 "thin_provision": false, 00:08:18.963 "num_allocated_clusters": 38, 00:08:18.963 "snapshot": false, 00:08:18.963 "clone": false, 00:08:18.963 "esnap_clone": false 00:08:18.963 } 00:08:18.963 } 00:08:18.963 } 00:08:18.963 ] 00:08:19.223 05:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:19.223 05:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eabe03e5-616c-4003-a251-4867eed69f8a 00:08:19.223 05:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:19.223 05:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:19.223 05:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eabe03e5-616c-4003-a251-4867eed69f8a 00:08:19.223 05:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:19.483 05:03:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:19.483 05:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b0d6dcfb-e510-4243-80c5-16fcb27bba05 00:08:19.742 05:03:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eabe03e5-616c-4003-a251-4867eed69f8a 00:08:19.742 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:20.002 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:20.002 00:08:20.002 real 0m16.216s 00:08:20.002 user 0m15.384s 00:08:20.002 sys 0m2.003s 00:08:20.002 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.002 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:20.002 ************************************ 00:08:20.002 END TEST lvs_grow_clean 00:08:20.002 ************************************ 00:08:20.002 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:20.002 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:20.002 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.002 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:20.262 ************************************ 
00:08:20.262 START TEST lvs_grow_dirty 00:08:20.262 ************************************ 00:08:20.262 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:20.262 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:20.262 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:20.262 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:20.262 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:20.262 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:20.262 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:20.262 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:20.262 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:20.262 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:20.522 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:20.522 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:20.522 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4ec60d4f-8420-49b6-975d-51e2509f1e02 00:08:20.522 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ec60d4f-8420-49b6-975d-51e2509f1e02 00:08:20.522 05:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:20.783 05:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:20.783 05:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:20.783 05:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4ec60d4f-8420-49b6-975d-51e2509f1e02 lvol 150 00:08:21.047 05:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9de76f0f-b27d-42a2-9c91-48e1d3faf1b4 00:08:21.047 05:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:21.047 05:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:21.047 [2024-12-09 05:03:03.468098] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:08:21.047 [2024-12-09 05:03:03.468149] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:21.047 true 00:08:21.047 05:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ec60d4f-8420-49b6-975d-51e2509f1e02 00:08:21.047 05:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:21.308 05:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:21.308 05:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:21.568 05:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9de76f0f-b27d-42a2-9c91-48e1d3faf1b4 00:08:21.568 05:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:21.828 [2024-12-09 05:03:04.190226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.828 05:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:22.089 05:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=331078 00:08:22.090 05:03:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:22.090 05:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:22.090 05:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 331078 /var/tmp/bdevperf.sock 00:08:22.090 05:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 331078 ']' 00:08:22.090 05:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:22.090 05:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.090 05:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:22.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:22.090 05:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.090 05:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.090 [2024-12-09 05:03:04.416165] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:08:22.090 [2024-12-09 05:03:04.416221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid331078 ] 00:08:22.090 [2024-12-09 05:03:04.509119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.090 [2024-12-09 05:03:04.551007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.031 05:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.031 05:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:23.031 05:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:23.291 Nvme0n1 00:08:23.291 05:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:23.291 [ 00:08:23.291 { 00:08:23.291 "name": "Nvme0n1", 00:08:23.291 "aliases": [ 00:08:23.291 "9de76f0f-b27d-42a2-9c91-48e1d3faf1b4" 00:08:23.291 ], 00:08:23.291 "product_name": "NVMe disk", 00:08:23.291 "block_size": 4096, 00:08:23.291 "num_blocks": 38912, 00:08:23.291 "uuid": "9de76f0f-b27d-42a2-9c91-48e1d3faf1b4", 00:08:23.291 "numa_id": 1, 00:08:23.291 "assigned_rate_limits": { 00:08:23.291 "rw_ios_per_sec": 0, 00:08:23.291 "rw_mbytes_per_sec": 0, 00:08:23.291 "r_mbytes_per_sec": 0, 00:08:23.291 "w_mbytes_per_sec": 0 00:08:23.291 }, 00:08:23.291 "claimed": false, 00:08:23.291 "zoned": false, 00:08:23.291 "supported_io_types": { 00:08:23.291 "read": true, 
00:08:23.291 "write": true, 00:08:23.291 "unmap": true, 00:08:23.291 "flush": true, 00:08:23.291 "reset": true, 00:08:23.291 "nvme_admin": true, 00:08:23.291 "nvme_io": true, 00:08:23.291 "nvme_io_md": false, 00:08:23.291 "write_zeroes": true, 00:08:23.291 "zcopy": false, 00:08:23.291 "get_zone_info": false, 00:08:23.291 "zone_management": false, 00:08:23.291 "zone_append": false, 00:08:23.291 "compare": true, 00:08:23.291 "compare_and_write": true, 00:08:23.291 "abort": true, 00:08:23.291 "seek_hole": false, 00:08:23.291 "seek_data": false, 00:08:23.291 "copy": true, 00:08:23.291 "nvme_iov_md": false 00:08:23.291 }, 00:08:23.291 "memory_domains": [ 00:08:23.291 { 00:08:23.291 "dma_device_id": "system", 00:08:23.291 "dma_device_type": 1 00:08:23.291 } 00:08:23.291 ], 00:08:23.291 "driver_specific": { 00:08:23.291 "nvme": [ 00:08:23.291 { 00:08:23.291 "trid": { 00:08:23.291 "trtype": "TCP", 00:08:23.291 "adrfam": "IPv4", 00:08:23.291 "traddr": "10.0.0.2", 00:08:23.291 "trsvcid": "4420", 00:08:23.291 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:23.291 }, 00:08:23.291 "ctrlr_data": { 00:08:23.291 "cntlid": 1, 00:08:23.291 "vendor_id": "0x8086", 00:08:23.291 "model_number": "SPDK bdev Controller", 00:08:23.291 "serial_number": "SPDK0", 00:08:23.291 "firmware_revision": "25.01", 00:08:23.291 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:23.291 "oacs": { 00:08:23.291 "security": 0, 00:08:23.291 "format": 0, 00:08:23.291 "firmware": 0, 00:08:23.291 "ns_manage": 0 00:08:23.291 }, 00:08:23.291 "multi_ctrlr": true, 00:08:23.291 "ana_reporting": false 00:08:23.291 }, 00:08:23.291 "vs": { 00:08:23.291 "nvme_version": "1.3" 00:08:23.291 }, 00:08:23.291 "ns_data": { 00:08:23.291 "id": 1, 00:08:23.291 "can_share": true 00:08:23.291 } 00:08:23.291 } 00:08:23.291 ], 00:08:23.291 "mp_policy": "active_passive" 00:08:23.291 } 00:08:23.291 } 00:08:23.291 ] 00:08:23.292 05:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=331581 
00:08:23.292 05:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:23.292 05:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:23.551 Running I/O for 10 seconds... 00:08:24.490 Latency(us) 00:08:24.490 [2024-12-09T04:03:06.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.490 Nvme0n1 : 1.00 23624.00 92.28 0.00 0.00 0.00 0.00 0.00 00:08:24.490 [2024-12-09T04:03:06.960Z] =================================================================================================================== 00:08:24.490 [2024-12-09T04:03:06.960Z] Total : 23624.00 92.28 0.00 0.00 0.00 0.00 0.00 00:08:24.490 00:08:25.430 05:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4ec60d4f-8420-49b6-975d-51e2509f1e02 00:08:25.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.430 Nvme0n1 : 2.00 23812.00 93.02 0.00 0.00 0.00 0.00 0.00 00:08:25.430 [2024-12-09T04:03:07.900Z] =================================================================================================================== 00:08:25.430 [2024-12-09T04:03:07.900Z] Total : 23812.00 93.02 0.00 0.00 0.00 0.00 0.00 00:08:25.430 00:08:25.430 true 00:08:25.690 05:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:25.690 05:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
4ec60d4f-8420-49b6-975d-51e2509f1e02 00:08:25.690 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:25.690 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:25.690 05:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 331581 00:08:26.642 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.642 Nvme0n1 : 3.00 23856.00 93.19 0.00 0.00 0.00 0.00 0.00 00:08:26.642 [2024-12-09T04:03:09.112Z] =================================================================================================================== 00:08:26.642 [2024-12-09T04:03:09.112Z] Total : 23856.00 93.19 0.00 0.00 0.00 0.00 0.00 00:08:26.642 00:08:27.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.580 Nvme0n1 : 4.00 23926.00 93.46 0.00 0.00 0.00 0.00 0.00 00:08:27.580 [2024-12-09T04:03:10.050Z] =================================================================================================================== 00:08:27.580 [2024-12-09T04:03:10.050Z] Total : 23926.00 93.46 0.00 0.00 0.00 0.00 0.00 00:08:27.580 00:08:28.520 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.520 Nvme0n1 : 5.00 23878.80 93.28 0.00 0.00 0.00 0.00 0.00 00:08:28.520 [2024-12-09T04:03:10.990Z] =================================================================================================================== 00:08:28.520 [2024-12-09T04:03:10.990Z] Total : 23878.80 93.28 0.00 0.00 0.00 0.00 0.00 00:08:28.520 00:08:29.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.464 Nvme0n1 : 6.00 23926.67 93.46 0.00 0.00 0.00 0.00 0.00 00:08:29.464 [2024-12-09T04:03:11.934Z] =================================================================================================================== 00:08:29.464 
[2024-12-09T04:03:11.934Z] Total : 23926.67 93.46 0.00 0.00 0.00 0.00 0.00 00:08:29.464 00:08:30.406 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.406 Nvme0n1 : 7.00 23984.29 93.69 0.00 0.00 0.00 0.00 0.00 00:08:30.406 [2024-12-09T04:03:12.876Z] =================================================================================================================== 00:08:30.406 [2024-12-09T04:03:12.876Z] Total : 23984.29 93.69 0.00 0.00 0.00 0.00 0.00 00:08:30.406 00:08:31.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.348 Nvme0n1 : 8.00 24027.62 93.86 0.00 0.00 0.00 0.00 0.00 00:08:31.348 [2024-12-09T04:03:13.818Z] =================================================================================================================== 00:08:31.348 [2024-12-09T04:03:13.818Z] Total : 24027.62 93.86 0.00 0.00 0.00 0.00 0.00 00:08:31.348 00:08:32.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.727 Nvme0n1 : 9.00 24049.78 93.94 0.00 0.00 0.00 0.00 0.00 00:08:32.727 [2024-12-09T04:03:15.197Z] =================================================================================================================== 00:08:32.727 [2024-12-09T04:03:15.197Z] Total : 24049.78 93.94 0.00 0.00 0.00 0.00 0.00 00:08:32.727 00:08:33.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.665 Nvme0n1 : 10.00 24069.30 94.02 0.00 0.00 0.00 0.00 0.00 00:08:33.665 [2024-12-09T04:03:16.135Z] =================================================================================================================== 00:08:33.665 [2024-12-09T04:03:16.135Z] Total : 24069.30 94.02 0.00 0.00 0.00 0.00 0.00 00:08:33.665 00:08:33.665 00:08:33.665 Latency(us) 00:08:33.665 [2024-12-09T04:03:16.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:33.665 Nvme0n1 : 10.00 24074.70 94.04 0.00 0.00 5314.19 3040.87 15309.21 00:08:33.665 [2024-12-09T04:03:16.135Z] =================================================================================================================== 00:08:33.665 [2024-12-09T04:03:16.135Z] Total : 24074.70 94.04 0.00 0.00 5314.19 3040.87 15309.21 00:08:33.665 { 00:08:33.665 "results": [ 00:08:33.665 { 00:08:33.665 "job": "Nvme0n1", 00:08:33.665 "core_mask": "0x2", 00:08:33.665 "workload": "randwrite", 00:08:33.665 "status": "finished", 00:08:33.665 "queue_depth": 128, 00:08:33.665 "io_size": 4096, 00:08:33.665 "runtime": 10.003072, 00:08:33.665 "iops": 24074.704250854138, 00:08:33.665 "mibps": 94.04181347989898, 00:08:33.665 "io_failed": 0, 00:08:33.665 "io_timeout": 0, 00:08:33.665 "avg_latency_us": 5314.191618358864, 00:08:33.665 "min_latency_us": 3040.8704, 00:08:33.665 "max_latency_us": 15309.2096 00:08:33.665 } 00:08:33.665 ], 00:08:33.665 "core_count": 1 00:08:33.665 } 00:08:33.665 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 331078 00:08:33.665 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 331078 ']' 00:08:33.665 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 331078 00:08:33.665 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:33.665 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.665 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 331078 00:08:33.665 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:33.665 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:33.665 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 331078' 00:08:33.665 killing process with pid 331078 00:08:33.665 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 331078 00:08:33.665 Received shutdown signal, test time was about 10.000000 seconds 00:08:33.665 00:08:33.665 Latency(us) 00:08:33.665 [2024-12-09T04:03:16.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.665 [2024-12-09T04:03:16.135Z] =================================================================================================================== 00:08:33.665 [2024-12-09T04:03:16.135Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:33.666 05:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 331078 00:08:33.666 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:33.926 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:34.184 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ec60d4f-8420-49b6-975d-51e2509f1e02 00:08:34.184 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:34.444 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:34.444 05:03:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:34.444 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 327438 00:08:34.444 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 327438 00:08:34.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 327438 Killed "${NVMF_APP[@]}" "$@" 00:08:34.444 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:34.444 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:34.444 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:34.444 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:34.444 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:34.444 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=333470 00:08:34.444 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 333470 00:08:34.444 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:34.444 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 333470 ']' 00:08:34.445 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.445 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.445 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.445 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.445 05:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:34.445 [2024-12-09 05:03:16.764648] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:08:34.445 [2024-12-09 05:03:16.764699] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.445 [2024-12-09 05:03:16.859306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.445 [2024-12-09 05:03:16.899796] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.445 [2024-12-09 05:03:16.899833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.445 [2024-12-09 05:03:16.899843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.445 [2024-12-09 05:03:16.899851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.445 [2024-12-09 05:03:16.899859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:34.445 [2024-12-09 05:03:16.900431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.382 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.382 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:35.382 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:35.382 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:35.382 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:35.382 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.382 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:35.382 [2024-12-09 05:03:17.800801] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:35.382 [2024-12-09 05:03:17.800886] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:35.382 [2024-12-09 05:03:17.800912] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:35.382 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:35.382 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9de76f0f-b27d-42a2-9c91-48e1d3faf1b4 00:08:35.382 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9de76f0f-b27d-42a2-9c91-48e1d3faf1b4 
00:08:35.382 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:35.382 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:35.382 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:35.382 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:35.382 05:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:35.641 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9de76f0f-b27d-42a2-9c91-48e1d3faf1b4 -t 2000 00:08:35.900 [ 00:08:35.900 { 00:08:35.900 "name": "9de76f0f-b27d-42a2-9c91-48e1d3faf1b4", 00:08:35.900 "aliases": [ 00:08:35.900 "lvs/lvol" 00:08:35.900 ], 00:08:35.900 "product_name": "Logical Volume", 00:08:35.900 "block_size": 4096, 00:08:35.900 "num_blocks": 38912, 00:08:35.900 "uuid": "9de76f0f-b27d-42a2-9c91-48e1d3faf1b4", 00:08:35.900 "assigned_rate_limits": { 00:08:35.900 "rw_ios_per_sec": 0, 00:08:35.900 "rw_mbytes_per_sec": 0, 00:08:35.900 "r_mbytes_per_sec": 0, 00:08:35.900 "w_mbytes_per_sec": 0 00:08:35.900 }, 00:08:35.900 "claimed": false, 00:08:35.900 "zoned": false, 00:08:35.900 "supported_io_types": { 00:08:35.900 "read": true, 00:08:35.900 "write": true, 00:08:35.900 "unmap": true, 00:08:35.900 "flush": false, 00:08:35.900 "reset": true, 00:08:35.900 "nvme_admin": false, 00:08:35.900 "nvme_io": false, 00:08:35.900 "nvme_io_md": false, 00:08:35.900 "write_zeroes": true, 00:08:35.900 "zcopy": false, 00:08:35.900 "get_zone_info": false, 00:08:35.900 "zone_management": false, 00:08:35.900 "zone_append": 
false, 00:08:35.900 "compare": false, 00:08:35.900 "compare_and_write": false, 00:08:35.900 "abort": false, 00:08:35.900 "seek_hole": true, 00:08:35.900 "seek_data": true, 00:08:35.900 "copy": false, 00:08:35.900 "nvme_iov_md": false 00:08:35.900 }, 00:08:35.900 "driver_specific": { 00:08:35.900 "lvol": { 00:08:35.900 "lvol_store_uuid": "4ec60d4f-8420-49b6-975d-51e2509f1e02", 00:08:35.900 "base_bdev": "aio_bdev", 00:08:35.900 "thin_provision": false, 00:08:35.900 "num_allocated_clusters": 38, 00:08:35.900 "snapshot": false, 00:08:35.900 "clone": false, 00:08:35.900 "esnap_clone": false 00:08:35.900 } 00:08:35.900 } 00:08:35.900 } 00:08:35.900 ] 00:08:35.900 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:35.900 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ec60d4f-8420-49b6-975d-51e2509f1e02 00:08:35.900 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:36.160 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:36.160 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ec60d4f-8420-49b6-975d-51e2509f1e02 00:08:36.160 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:36.160 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:36.160 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:36.420 [2024-12-09 05:03:18.737491] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:36.420 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ec60d4f-8420-49b6-975d-51e2509f1e02 00:08:36.420 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:36.420 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ec60d4f-8420-49b6-975d-51e2509f1e02 00:08:36.420 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.420 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.420 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.420 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.420 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.420 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.420 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.420 05:03:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:36.420 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ec60d4f-8420-49b6-975d-51e2509f1e02 00:08:36.679 request: 00:08:36.679 { 00:08:36.679 "uuid": "4ec60d4f-8420-49b6-975d-51e2509f1e02", 00:08:36.679 "method": "bdev_lvol_get_lvstores", 00:08:36.679 "req_id": 1 00:08:36.679 } 00:08:36.679 Got JSON-RPC error response 00:08:36.679 response: 00:08:36.679 { 00:08:36.679 "code": -19, 00:08:36.679 "message": "No such device" 00:08:36.679 } 00:08:36.679 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:36.679 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:36.679 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:36.679 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:36.679 05:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:36.939 aio_bdev 00:08:36.939 05:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9de76f0f-b27d-42a2-9c91-48e1d3faf1b4 00:08:36.939 05:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9de76f0f-b27d-42a2-9c91-48e1d3faf1b4 00:08:36.939 05:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:36.939 05:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:36.939 05:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:36.939 05:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:36.939 05:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:36.939 05:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9de76f0f-b27d-42a2-9c91-48e1d3faf1b4 -t 2000 00:08:37.199 [ 00:08:37.199 { 00:08:37.199 "name": "9de76f0f-b27d-42a2-9c91-48e1d3faf1b4", 00:08:37.200 "aliases": [ 00:08:37.200 "lvs/lvol" 00:08:37.200 ], 00:08:37.200 "product_name": "Logical Volume", 00:08:37.200 "block_size": 4096, 00:08:37.200 "num_blocks": 38912, 00:08:37.200 "uuid": "9de76f0f-b27d-42a2-9c91-48e1d3faf1b4", 00:08:37.200 "assigned_rate_limits": { 00:08:37.200 "rw_ios_per_sec": 0, 00:08:37.200 "rw_mbytes_per_sec": 0, 00:08:37.200 "r_mbytes_per_sec": 0, 00:08:37.200 "w_mbytes_per_sec": 0 00:08:37.200 }, 00:08:37.200 "claimed": false, 00:08:37.200 "zoned": false, 00:08:37.200 "supported_io_types": { 00:08:37.200 "read": true, 00:08:37.200 "write": true, 00:08:37.200 "unmap": true, 00:08:37.200 "flush": false, 00:08:37.200 "reset": true, 00:08:37.200 "nvme_admin": false, 00:08:37.200 "nvme_io": false, 00:08:37.200 "nvme_io_md": false, 00:08:37.200 "write_zeroes": true, 00:08:37.200 "zcopy": false, 00:08:37.200 "get_zone_info": false, 00:08:37.200 "zone_management": false, 00:08:37.200 "zone_append": false, 00:08:37.200 "compare": false, 00:08:37.200 "compare_and_write": false, 
00:08:37.200 "abort": false, 00:08:37.200 "seek_hole": true, 00:08:37.200 "seek_data": true, 00:08:37.200 "copy": false, 00:08:37.200 "nvme_iov_md": false 00:08:37.200 }, 00:08:37.200 "driver_specific": { 00:08:37.200 "lvol": { 00:08:37.200 "lvol_store_uuid": "4ec60d4f-8420-49b6-975d-51e2509f1e02", 00:08:37.200 "base_bdev": "aio_bdev", 00:08:37.200 "thin_provision": false, 00:08:37.200 "num_allocated_clusters": 38, 00:08:37.200 "snapshot": false, 00:08:37.200 "clone": false, 00:08:37.200 "esnap_clone": false 00:08:37.200 } 00:08:37.200 } 00:08:37.200 } 00:08:37.200 ] 00:08:37.200 05:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:37.200 05:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:37.200 05:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ec60d4f-8420-49b6-975d-51e2509f1e02 00:08:37.461 05:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:37.461 05:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ec60d4f-8420-49b6-975d-51e2509f1e02 00:08:37.461 05:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:37.461 05:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:37.461 05:03:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9de76f0f-b27d-42a2-9c91-48e1d3faf1b4 00:08:37.721 05:03:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4ec60d4f-8420-49b6-975d-51e2509f1e02 00:08:37.981 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:38.242 00:08:38.242 real 0m18.048s 00:08:38.242 user 0m45.454s 00:08:38.242 sys 0m4.540s 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:38.242 ************************************ 00:08:38.242 END TEST lvs_grow_dirty 00:08:38.242 ************************************ 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:38.242 nvmf_trace.0 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:38.242 rmmod nvme_tcp 00:08:38.242 rmmod nvme_fabrics 00:08:38.242 rmmod nvme_keyring 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 333470 ']' 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 333470 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 333470 ']' 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 333470 
00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.242 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 333470 00:08:38.503 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:38.503 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:38.503 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 333470' 00:08:38.503 killing process with pid 333470 00:08:38.503 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 333470 00:08:38.503 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 333470 00:08:38.503 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:38.503 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:38.503 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:38.503 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:38.503 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:38.503 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:38.503 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:38.503 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:38.503 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:08:38.763 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.763 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.763 05:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.783 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:40.783 00:08:40.783 real 0m45.602s 00:08:40.783 user 1m7.561s 00:08:40.783 sys 0m12.708s 00:08:40.783 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.783 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:40.783 ************************************ 00:08:40.783 END TEST nvmf_lvs_grow 00:08:40.783 ************************************ 00:08:40.783 05:03:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:40.783 05:03:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:40.783 05:03:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.783 05:03:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:40.783 ************************************ 00:08:40.783 START TEST nvmf_bdev_io_wait 00:08:40.783 ************************************ 00:08:40.783 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:41.059 * Looking for test storage... 
00:08:41.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:41.059 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.059 --rc genhtml_branch_coverage=1 00:08:41.059 --rc genhtml_function_coverage=1 00:08:41.059 --rc genhtml_legend=1 00:08:41.059 --rc geninfo_all_blocks=1 00:08:41.059 --rc geninfo_unexecuted_blocks=1 00:08:41.059 00:08:41.059 ' 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:41.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.059 --rc genhtml_branch_coverage=1 00:08:41.059 --rc genhtml_function_coverage=1 00:08:41.059 --rc genhtml_legend=1 00:08:41.059 --rc geninfo_all_blocks=1 00:08:41.059 --rc geninfo_unexecuted_blocks=1 00:08:41.059 00:08:41.059 ' 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:41.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.059 --rc genhtml_branch_coverage=1 00:08:41.059 --rc genhtml_function_coverage=1 00:08:41.059 --rc genhtml_legend=1 00:08:41.059 --rc geninfo_all_blocks=1 00:08:41.059 --rc geninfo_unexecuted_blocks=1 00:08:41.059 00:08:41.059 ' 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:41.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.059 --rc genhtml_branch_coverage=1 00:08:41.059 --rc genhtml_function_coverage=1 00:08:41.059 --rc genhtml_legend=1 00:08:41.059 --rc geninfo_all_blocks=1 00:08:41.059 --rc geninfo_unexecuted_blocks=1 00:08:41.059 00:08:41.059 ' 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.059 05:03:23 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.059 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
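The `paths/export.sh` trace above shows the same `/opt/go`, `/opt/protoc`, and `/opt/golangci` prefixes prepended repeatedly as the script is re-sourced across nested shells, so PATH grows with duplicates. A small sketch of collapsing such a colon-separated list (the helper name `dedup_path` is illustrative, not part of SPDK):

```shell
#!/usr/bin/env bash
# Collapse repeated entries in a colon-separated PATH-like string,
# keeping the leftmost occurrence of each directory.
dedup_path() {
  local -a parts
  local -A seen=()
  local dir out=''
  IFS=':' read -ra parts <<< "$1"
  for dir in "${parts[@]}"; do
    [[ -n ${seen[$dir]:-} ]] && continue
    seen[$dir]=1
    out+=${out:+:}$dir
  done
  printf '%s\n' "$out"
}

dedup_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/bin"
# → /opt/go/1.21.1/bin:/usr/bin:/bin
```

Leftmost-wins matters here: the most recently prepended toolchain directory stays in front, so lookup behavior is unchanged while the string stops growing.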
00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:41.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:41.060 05:03:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
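`gather_supported_nvmf_pci_devs` declares per-family arrays (`e810`, `x722`, `mlx`) and then buckets each discovered PCI ID by known Intel (0x8086) and Mellanox (0x15b3) device IDs, as traced in the lines that follow. The classification can be sketched as a simple case match (the `nic_family` helper is illustrative, and the Mellanox wildcard is a simplification of the explicit ID list in the trace):

```shell
#!/usr/bin/env bash
# Sketch: map a "vendor:device" PCI ID to the NIC family the
# harness tracks. IDs taken from the trace above; the real script
# enumerates explicit Mellanox IDs (0x1017, 0x1019, 0x101b, ...).
nic_family() {
  case "$1" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810
    0x8086:0x37d2)               echo x722 ;;    # Intel X722
    0x15b3:*)                    echo mlx ;;     # Mellanox (simplified)
    *)                           echo unknown ;;
  esac
}

nic_family 0x8086:0x159b   # → e810
```

With two 0x159b functions found (0000:af:00.0 and 0000:af:00.1), both land in `e810`, which is why the trace later copies `e810` into `pci_devs` and iterates over two entries.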
00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:49.321 05:03:30 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:49.321 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:49.321 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.321 05:03:30 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:49.321 Found net devices under 0000:af:00.0: cvl_0_0 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.321 
05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:49.321 Found net devices under 0000:af:00.1: cvl_0_1 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.321 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.321 05:03:30 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:49.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:08:49.322 00:08:49.322 --- 10.0.0.2 ping statistics --- 00:08:49.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.322 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:49.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:08:49.322 00:08:49.322 --- 10.0.0.1 ping statistics --- 00:08:49.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.322 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=338052 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 338052 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 338052 ']' 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.322 05:03:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.322 [2024-12-09 05:03:30.801689] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:08:49.322 [2024-12-09 05:03:30.801743] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.322 [2024-12-09 05:03:30.897824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.322 [2024-12-09 05:03:30.940914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.322 [2024-12-09 05:03:30.940952] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
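`waitforlisten` above blocks until the freshly launched `nvmf_tgt` (pid 338052) is up and listening on `/var/tmp/spdk.sock`. The core pattern is a bounded poll loop; a minimal sketch, assuming a plain existence check (`wait_for_path` is an illustrative name — SPDK's helper additionally probes the UNIX socket once it appears):

```shell
#!/usr/bin/env bash
# Poll until a filesystem path appears or the retry budget runs out.
# Interval and default retry count are illustrative.
wait_for_path() {
  local path=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [[ -e $path ]] && return 0   # found: process is ready
    sleep 0.1
  done
  return 1                        # timed out
}
```

The trap registered just afterwards (`process_shm ... || :; nvmftestfini`) pairs with this: once the poll succeeds the test owns a live target and must tear it down on SIGINT/SIGTERM/EXIT.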
00:08:49.322 [2024-12-09 05:03:30.940961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.322 [2024-12-09 05:03:30.940970] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.322 [2024-12-09 05:03:30.940976] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.322 [2024-12-09 05:03:30.942777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.322 [2024-12-09 05:03:30.942814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.322 [2024-12-09 05:03:30.942925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.322 [2024-12-09 05:03:30.942926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.322 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.322 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:49.322 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:49.322 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:49.322 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.322 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.322 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:49.322 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.322 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.322 05:03:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.322 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:49.322 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.322 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.322 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.322 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:49.322 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.322 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.322 [2024-12-09 05:03:31.760942] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.322 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.322 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:49.322 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.322 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.583 Malloc0 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.583 
05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.583 [2024-12-09 05:03:31.819193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=338335 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:49.583 
05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=338337 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:49.583 { 00:08:49.583 "params": { 00:08:49.583 "name": "Nvme$subsystem", 00:08:49.583 "trtype": "$TEST_TRANSPORT", 00:08:49.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:49.583 "adrfam": "ipv4", 00:08:49.583 "trsvcid": "$NVMF_PORT", 00:08:49.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:49.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:49.583 "hdgst": ${hdgst:-false}, 00:08:49.583 "ddgst": ${ddgst:-false} 00:08:49.583 }, 00:08:49.583 "method": "bdev_nvme_attach_controller" 00:08:49.583 } 00:08:49.583 EOF 00:08:49.583 )") 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=338339 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:49.583 { 00:08:49.583 "params": { 00:08:49.583 "name": "Nvme$subsystem", 00:08:49.583 "trtype": "$TEST_TRANSPORT", 00:08:49.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:49.583 "adrfam": "ipv4", 00:08:49.583 "trsvcid": "$NVMF_PORT", 00:08:49.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:49.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:49.583 "hdgst": ${hdgst:-false}, 00:08:49.583 "ddgst": ${ddgst:-false} 00:08:49.583 }, 00:08:49.583 "method": "bdev_nvme_attach_controller" 00:08:49.583 } 00:08:49.583 EOF 00:08:49.583 )") 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=338342 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:49.583 { 00:08:49.583 "params": { 00:08:49.583 "name": "Nvme$subsystem", 00:08:49.583 "trtype": "$TEST_TRANSPORT", 00:08:49.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:49.583 "adrfam": "ipv4", 00:08:49.583 "trsvcid": "$NVMF_PORT", 00:08:49.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:49.583 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:49.583 "hdgst": ${hdgst:-false}, 00:08:49.583 "ddgst": ${ddgst:-false} 00:08:49.583 }, 00:08:49.583 "method": "bdev_nvme_attach_controller" 00:08:49.583 } 00:08:49.583 EOF 00:08:49.583 )") 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:49.583 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:49.583 { 00:08:49.583 "params": { 00:08:49.583 "name": "Nvme$subsystem", 00:08:49.583 "trtype": "$TEST_TRANSPORT", 00:08:49.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:49.583 "adrfam": "ipv4", 00:08:49.583 "trsvcid": "$NVMF_PORT", 00:08:49.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:49.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:49.583 "hdgst": ${hdgst:-false}, 00:08:49.583 "ddgst": ${ddgst:-false} 00:08:49.583 }, 00:08:49.584 "method": "bdev_nvme_attach_controller" 00:08:49.584 } 00:08:49.584 EOF 00:08:49.584 )") 00:08:49.584 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:49.584 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 338335 00:08:49.584 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@584 -- # jq . 00:08:49.584 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:49.584 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:49.584 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:49.584 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:49.584 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:49.584 "params": { 00:08:49.584 "name": "Nvme1", 00:08:49.584 "trtype": "tcp", 00:08:49.584 "traddr": "10.0.0.2", 00:08:49.584 "adrfam": "ipv4", 00:08:49.584 "trsvcid": "4420", 00:08:49.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:49.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:49.584 "hdgst": false, 00:08:49.584 "ddgst": false 00:08:49.584 }, 00:08:49.584 "method": "bdev_nvme_attach_controller" 00:08:49.584 }' 00:08:49.584 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:49.584 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:49.584 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:49.584 "params": { 00:08:49.584 "name": "Nvme1", 00:08:49.584 "trtype": "tcp", 00:08:49.584 "traddr": "10.0.0.2", 00:08:49.584 "adrfam": "ipv4", 00:08:49.584 "trsvcid": "4420", 00:08:49.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:49.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:49.584 "hdgst": false, 00:08:49.584 "ddgst": false 00:08:49.584 }, 00:08:49.584 "method": "bdev_nvme_attach_controller" 00:08:49.584 }' 00:08:49.584 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:49.584 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:49.584 "params": { 00:08:49.584 "name": "Nvme1", 00:08:49.584 "trtype": "tcp", 00:08:49.584 "traddr": "10.0.0.2", 00:08:49.584 "adrfam": "ipv4", 00:08:49.584 "trsvcid": "4420", 00:08:49.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:49.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:49.584 "hdgst": false, 00:08:49.584 "ddgst": false 00:08:49.584 }, 00:08:49.584 "method": "bdev_nvme_attach_controller" 00:08:49.584 }' 00:08:49.584 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:49.584 05:03:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:49.584 "params": { 00:08:49.584 "name": "Nvme1", 00:08:49.584 "trtype": "tcp", 00:08:49.584 "traddr": "10.0.0.2", 00:08:49.584 "adrfam": "ipv4", 00:08:49.584 "trsvcid": "4420", 00:08:49.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:49.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:49.584 "hdgst": false, 00:08:49.584 "ddgst": false 00:08:49.584 }, 00:08:49.584 "method": "bdev_nvme_attach_controller" 00:08:49.584 }' 00:08:49.584 [2024-12-09 05:03:31.855732] Starting SPDK v25.01-pre git sha1 
cabd61f7f / DPDK 24.03.0 initialization... 00:08:49.584 [2024-12-09 05:03:31.855783] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:49.584 [2024-12-09 05:03:31.872968] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:08:49.584 [2024-12-09 05:03:31.873012] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:49.584 [2024-12-09 05:03:31.877695] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:08:49.584 [2024-12-09 05:03:31.877743] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:49.584 [2024-12-09 05:03:31.877768] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:08:49.584 [2024-12-09 05:03:31.877810] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:49.584 [2024-12-09 05:03:32.039901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.844 [2024-12-09 05:03:32.080602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:49.844 [2024-12-09 05:03:32.127800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.844 [2024-12-09 05:03:32.187406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:49.844 [2024-12-09 05:03:32.220364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.844 [2024-12-09 05:03:32.262443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:49.844 [2024-12-09 05:03:32.269578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.844 [2024-12-09 05:03:32.310324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:50.104 Running I/O for 1 seconds... 00:08:50.104 Running I/O for 1 seconds... 00:08:50.104 Running I/O for 1 seconds... 00:08:50.104 Running I/O for 1 seconds... 
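The four bdevperf instances launched above each receive their controller configuration on `/dev/fd/63` via process substitution; the `config+=("$(cat <<-EOF ...)")` fragments in the trace are the template being expanded. Below is a minimal standalone sketch of that pattern, not a verbatim copy of the `gen_nvmf_target_json` helper in SPDK's `nvmf/common.sh` (which also pipes the result through `jq .`); the address, port, and NQN values simply mirror what the trace printed and are illustrative only.

```shell
#!/usr/bin/env bash
# Sketch: expand a per-subsystem heredoc template into the JSON config
# that each bdevperf instance reads with --json /dev/fd/63.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One bdev_nvme_attach_controller entry per requested subsystem.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Join the entries with commas, as the IFS=, / printf lines in the
    # trace do before handing the result to jq.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 1
```

A consumer would then read it as `bdevperf --json <(gen_nvmf_target_json 1) ...`, which is where the `/dev/fd/63` path in the command lines above comes from.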
00:08:51.042 14341.00 IOPS, 56.02 MiB/s 00:08:51.042 Latency(us) 00:08:51.042 [2024-12-09T04:03:33.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.042 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:51.042 Nvme1n1 : 1.01 14397.51 56.24 0.00 0.00 8865.11 4771.02 15833.50 00:08:51.042 [2024-12-09T04:03:33.512Z] =================================================================================================================== 00:08:51.042 [2024-12-09T04:03:33.512Z] Total : 14397.51 56.24 0.00 0.00 8865.11 4771.02 15833.50 00:08:51.042 247312.00 IOPS, 966.06 MiB/s 00:08:51.042 Latency(us) 00:08:51.042 [2024-12-09T04:03:33.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.042 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:51.042 Nvme1n1 : 1.00 246949.08 964.64 0.00 0.00 515.48 224.46 1468.01 00:08:51.042 [2024-12-09T04:03:33.512Z] =================================================================================================================== 00:08:51.042 [2024-12-09T04:03:33.512Z] Total : 246949.08 964.64 0.00 0.00 515.48 224.46 1468.01 00:08:51.301 10120.00 IOPS, 39.53 MiB/s 00:08:51.301 Latency(us) 00:08:51.301 [2024-12-09T04:03:33.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.301 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:51.301 Nvme1n1 : 1.01 10188.55 39.80 0.00 0.00 12518.51 5557.45 23697.82 00:08:51.301 [2024-12-09T04:03:33.771Z] =================================================================================================================== 00:08:51.301 [2024-12-09T04:03:33.771Z] Total : 10188.55 39.80 0.00 0.00 12518.51 5557.45 23697.82 00:08:51.301 9451.00 IOPS, 36.92 MiB/s 00:08:51.301 Latency(us) 00:08:51.301 [2024-12-09T04:03:33.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.301 Job: Nvme1n1 (Core Mask 
0x20, workload: read, depth: 128, IO size: 4096) 00:08:51.301 Nvme1n1 : 1.01 9526.18 37.21 0.00 0.00 13394.33 4823.45 23802.68 00:08:51.301 [2024-12-09T04:03:33.771Z] =================================================================================================================== 00:08:51.301 [2024-12-09T04:03:33.771Z] Total : 9526.18 37.21 0.00 0.00 13394.33 4823.45 23802.68 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 338337 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 338339 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 338342 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:51.301 
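The teardown that follows tears down the nvmf target with SPDK's `killprocess` helper from `autotest_common.sh`. Based on what the xtrace shows (`kill -0`, `uname`, `ps --no-headers -o comm=`, a guard against killing `sudo`, then `kill` and `wait`), a simplified sketch of that helper looks like this; it is an assumption-laden reconstruction, not the upstream implementation.

```shell
#!/usr/bin/env bash
# Sketch of the killprocess teardown pattern from the trace.
killprocess() {
    local pid=$1
    # Bail out if the pid is not alive at all.
    kill -0 "$pid" || return 1
    # Never kill a process whose command name is "sudo" (the trace
    # checks reactor_0 against sudo before proceeding).
    [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # Reap it; wait returns the (nonzero) termination status, so don't
    # let that leak out as the helper's own status.
    wait "$pid" 2>/dev/null || true
    return 0
}

sleep 60 &          # stand-in for the nvmf target (pid 338052 in the log)
killprocess $!
```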
05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:51.301 rmmod nvme_tcp 00:08:51.301 rmmod nvme_fabrics 00:08:51.301 rmmod nvme_keyring 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 338052 ']' 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 338052 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 338052 ']' 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 338052 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:51.301 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.561 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 338052 00:08:51.561 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:51.561 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:51.561 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 338052' 00:08:51.561 killing process with pid 338052 00:08:51.561 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 338052 00:08:51.561 05:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@978 -- # wait 338052 00:08:51.561 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:51.561 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:51.561 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:51.561 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:51.561 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:51.561 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:51.561 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:51.561 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:51.561 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:51.561 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.561 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.561 05:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:54.102 00:08:54.102 real 0m12.959s 00:08:54.102 user 0m19.408s 00:08:54.102 sys 0m7.636s 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.102 ************************************ 00:08:54.102 END TEST nvmf_bdev_io_wait 00:08:54.102 
************************************ 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:54.102 ************************************ 00:08:54.102 START TEST nvmf_queue_depth 00:08:54.102 ************************************ 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:54.102 * Looking for test storage... 00:08:54.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:54.102 05:03:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:54.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.102 --rc genhtml_branch_coverage=1 00:08:54.102 --rc genhtml_function_coverage=1 00:08:54.102 --rc genhtml_legend=1 00:08:54.102 --rc geninfo_all_blocks=1 00:08:54.102 --rc 
geninfo_unexecuted_blocks=1 00:08:54.102 00:08:54.102 ' 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:54.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.102 --rc genhtml_branch_coverage=1 00:08:54.102 --rc genhtml_function_coverage=1 00:08:54.102 --rc genhtml_legend=1 00:08:54.102 --rc geninfo_all_blocks=1 00:08:54.102 --rc geninfo_unexecuted_blocks=1 00:08:54.102 00:08:54.102 ' 00:08:54.102 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:54.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.102 --rc genhtml_branch_coverage=1 00:08:54.103 --rc genhtml_function_coverage=1 00:08:54.103 --rc genhtml_legend=1 00:08:54.103 --rc geninfo_all_blocks=1 00:08:54.103 --rc geninfo_unexecuted_blocks=1 00:08:54.103 00:08:54.103 ' 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:54.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.103 --rc genhtml_branch_coverage=1 00:08:54.103 --rc genhtml_function_coverage=1 00:08:54.103 --rc genhtml_legend=1 00:08:54.103 --rc geninfo_all_blocks=1 00:08:54.103 --rc geninfo_unexecuted_blocks=1 00:08:54.103 00:08:54.103 ' 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.103 05:03:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.103 05:03:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:54.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.103 05:03:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:54.103 05:03:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:02.236 05:03:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:02.236 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:02.236 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:02.237 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:02.237 Found net devices under 0000:af:00.0: cvl_0_0 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:02.237 Found net devices under 0000:af:00.1: cvl_0_1 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.237 
05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:02.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:09:02.237 00:09:02.237 --- 10.0.0.2 ping statistics --- 00:09:02.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.237 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:02.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:09:02.237 00:09:02.237 --- 10.0.0.1 ping statistics --- 00:09:02.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.237 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=342398 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 342398 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 342398 ']' 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.237 05:03:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.237 [2024-12-09 05:03:43.822887] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:09:02.237 [2024-12-09 05:03:43.822939] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.237 [2024-12-09 05:03:43.927612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.237 [2024-12-09 05:03:43.966951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.237 [2024-12-09 05:03:43.966989] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:02.237 [2024-12-09 05:03:43.966999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.237 [2024-12-09 05:03:43.967008] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.237 [2024-12-09 05:03:43.967015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.237 [2024-12-09 05:03:43.967598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.237 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.237 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:02.237 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:02.237 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:02.238 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.498 [2024-12-09 05:03:44.713872] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.498 Malloc0 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.498 [2024-12-09 05:03:44.756396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.498 05:03:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=342635 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 342635 /var/tmp/bdevperf.sock 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 342635 ']' 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:02.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.498 05:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.498 [2024-12-09 05:03:44.809634] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:09:02.498 [2024-12-09 05:03:44.809677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid342635 ] 00:09:02.498 [2024-12-09 05:03:44.900979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.498 [2024-12-09 05:03:44.939566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.438 05:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.438 05:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:03.438 05:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:03.438 05:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.438 05:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.438 NVMe0n1 00:09:03.438 05:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.438 05:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:03.438 Running I/O for 10 seconds... 
00:09:05.750 12288.00 IOPS, 48.00 MiB/s [2024-12-09T04:03:49.158Z] 12556.50 IOPS, 49.05 MiB/s [2024-12-09T04:03:50.097Z] 12631.67 IOPS, 49.34 MiB/s [2024-12-09T04:03:51.055Z] 12769.00 IOPS, 49.88 MiB/s [2024-12-09T04:03:51.995Z] 12689.80 IOPS, 49.57 MiB/s [2024-12-09T04:03:52.932Z] 12716.67 IOPS, 49.67 MiB/s [2024-12-09T04:03:53.868Z] 12732.29 IOPS, 49.74 MiB/s [2024-12-09T04:03:55.244Z] 12780.25 IOPS, 49.92 MiB/s [2024-12-09T04:03:56.181Z] 12807.11 IOPS, 50.03 MiB/s [2024-12-09T04:03:56.181Z] 12833.50 IOPS, 50.13 MiB/s 00:09:13.711 Latency(us) 00:09:13.711 [2024-12-09T04:03:56.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.711 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:13.711 Verification LBA range: start 0x0 length 0x4000 00:09:13.711 NVMe0n1 : 10.10 12810.85 50.04 0.00 0.00 79344.21 18245.22 60817.41 00:09:13.711 [2024-12-09T04:03:56.181Z] =================================================================================================================== 00:09:13.711 [2024-12-09T04:03:56.181Z] Total : 12810.85 50.04 0.00 0.00 79344.21 18245.22 60817.41 00:09:13.711 { 00:09:13.711 "results": [ 00:09:13.711 { 00:09:13.711 "job": "NVMe0n1", 00:09:13.711 "core_mask": "0x1", 00:09:13.711 "workload": "verify", 00:09:13.711 "status": "finished", 00:09:13.711 "verify_range": { 00:09:13.711 "start": 0, 00:09:13.711 "length": 16384 00:09:13.711 }, 00:09:13.711 "queue_depth": 1024, 00:09:13.711 "io_size": 4096, 00:09:13.711 "runtime": 10.096289, 00:09:13.711 "iops": 12810.84564833673, 00:09:13.711 "mibps": 50.04236581381535, 00:09:13.711 "io_failed": 0, 00:09:13.711 "io_timeout": 0, 00:09:13.711 "avg_latency_us": 79344.21337212661, 00:09:13.711 "min_latency_us": 18245.2224, 00:09:13.711 "max_latency_us": 60817.408 00:09:13.711 } 00:09:13.711 ], 00:09:13.711 "core_count": 1 00:09:13.711 } 00:09:13.711 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 
342635 00:09:13.711 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 342635 ']' 00:09:13.711 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 342635 00:09:13.711 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:13.711 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.711 05:03:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 342635 00:09:13.712 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.712 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.712 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 342635' 00:09:13.712 killing process with pid 342635 00:09:13.712 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 342635 00:09:13.712 Received shutdown signal, test time was about 10.000000 seconds 00:09:13.712 00:09:13.712 Latency(us) 00:09:13.712 [2024-12-09T04:03:56.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.712 [2024-12-09T04:03:56.182Z] =================================================================================================================== 00:09:13.712 [2024-12-09T04:03:56.182Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:13.712 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 342635 00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:13.971 
05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:13.971 rmmod nvme_tcp 00:09:13.971 rmmod nvme_fabrics 00:09:13.971 rmmod nvme_keyring 00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 342398 ']' 00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 342398 00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 342398 ']' 00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 342398 00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 342398 00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 
00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 342398' 00:09:13.971 killing process with pid 342398 00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 342398 00:09:13.971 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 342398 00:09:14.230 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:14.230 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:14.230 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:14.230 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:14.230 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:14.230 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:14.230 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:14.230 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:14.230 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:14.230 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.230 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.230 05:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.772 05:03:58 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:16.772 00:09:16.772 real 0m22.504s 00:09:16.772 user 0m25.574s 00:09:16.772 sys 0m7.470s 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:16.772 ************************************ 00:09:16.772 END TEST nvmf_queue_depth 00:09:16.772 ************************************ 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.772 ************************************ 00:09:16.772 START TEST nvmf_target_multipath 00:09:16.772 ************************************ 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:16.772 * Looking for test storage... 
00:09:16.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:16.772 05:03:58 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:16.772 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:16.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.772 --rc genhtml_branch_coverage=1 00:09:16.772 --rc genhtml_function_coverage=1 00:09:16.772 --rc genhtml_legend=1 00:09:16.773 --rc geninfo_all_blocks=1 00:09:16.773 --rc geninfo_unexecuted_blocks=1 00:09:16.773 00:09:16.773 ' 00:09:16.773 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:16.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.773 --rc genhtml_branch_coverage=1 00:09:16.773 --rc genhtml_function_coverage=1 00:09:16.773 --rc genhtml_legend=1 00:09:16.773 --rc geninfo_all_blocks=1 00:09:16.773 --rc geninfo_unexecuted_blocks=1 00:09:16.773 00:09:16.773 ' 00:09:16.773 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:16.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.773 --rc genhtml_branch_coverage=1 00:09:16.773 --rc genhtml_function_coverage=1 00:09:16.773 --rc genhtml_legend=1 00:09:16.773 --rc geninfo_all_blocks=1 00:09:16.773 --rc geninfo_unexecuted_blocks=1 00:09:16.773 00:09:16.773 ' 00:09:16.773 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:16.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.773 --rc genhtml_branch_coverage=1 00:09:16.773 --rc genhtml_function_coverage=1 00:09:16.773 --rc genhtml_legend=1 00:09:16.773 --rc geninfo_all_blocks=1 00:09:16.773 --rc geninfo_unexecuted_blocks=1 00:09:16.773 00:09:16.773 ' 00:09:16.773 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.773 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:16.773 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.773 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.773 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.773 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.773 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.773 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.773 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.773 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.773 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.773 05:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:16.773 05:03:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:24.915 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:24.915 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.915 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:24.916 Found net devices under 0000:af:00.0: cvl_0_0 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.916 05:04:06 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:24.916 Found net devices under 0000:af:00.1: cvl_0_1 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:24.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:09:24.916 00:09:24.916 --- 10.0.0.2 ping statistics --- 00:09:24.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.916 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:24.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:09:24.916 00:09:24.916 --- 10.0.0.1 ping statistics --- 00:09:24.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.916 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:24.916 only one NIC for nvmf test 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:24.916 05:04:06 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:24.916 rmmod nvme_tcp 00:09:24.916 rmmod nvme_fabrics 00:09:24.916 rmmod nvme_keyring 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.916 05:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:26.297 00:09:26.297 real 0m9.821s 00:09:26.297 user 0m2.170s 00:09:26.297 sys 0m5.707s 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:26.297 ************************************ 00:09:26.297 END TEST nvmf_target_multipath 00:09:26.297 ************************************ 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:26.297 ************************************ 00:09:26.297 START TEST nvmf_zcopy 00:09:26.297 ************************************ 00:09:26.297 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:26.558 * Looking for test storage... 00:09:26.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.558 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:26.558 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:26.558 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:26.558 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:26.558 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.558 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.558 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.558 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.559 05:04:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:26.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.559 --rc genhtml_branch_coverage=1 00:09:26.559 --rc genhtml_function_coverage=1 00:09:26.559 --rc genhtml_legend=1 00:09:26.559 --rc geninfo_all_blocks=1 00:09:26.559 --rc geninfo_unexecuted_blocks=1 00:09:26.559 00:09:26.559 ' 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:26.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.559 --rc genhtml_branch_coverage=1 00:09:26.559 --rc genhtml_function_coverage=1 00:09:26.559 --rc genhtml_legend=1 00:09:26.559 --rc geninfo_all_blocks=1 00:09:26.559 --rc geninfo_unexecuted_blocks=1 00:09:26.559 00:09:26.559 ' 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:26.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.559 --rc genhtml_branch_coverage=1 00:09:26.559 --rc genhtml_function_coverage=1 00:09:26.559 --rc genhtml_legend=1 00:09:26.559 --rc geninfo_all_blocks=1 00:09:26.559 --rc geninfo_unexecuted_blocks=1 00:09:26.559 00:09:26.559 ' 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:26.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.559 --rc genhtml_branch_coverage=1 00:09:26.559 --rc 
genhtml_function_coverage=1 00:09:26.559 --rc genhtml_legend=1 00:09:26.559 --rc geninfo_all_blocks=1 00:09:26.559 --rc geninfo_unexecuted_blocks=1 00:09:26.559 00:09:26.559 ' 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.559 05:04:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:26.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:26.559 05:04:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:26.559 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:26.560 05:04:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:34.699 05:04:15 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:34.699 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:34.699 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.699 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:34.700 Found net devices under 0000:af:00.0: cvl_0_0 00:09:34.700 05:04:15 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:34.700 Found net devices under 0000:af:00.1: cvl_0_1 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.700 05:04:15 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:34.700 05:04:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:34.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:09:34.700 00:09:34.700 --- 10.0.0.2 ping statistics --- 00:09:34.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.700 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:34.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:34.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:09:34.700 00:09:34.700 --- 10.0.0.1 ping statistics --- 00:09:34.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.700 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=352175 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 352175 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 352175 ']' 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.700 05:04:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.700 [2024-12-09 05:04:16.324951] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:09:34.700 [2024-12-09 05:04:16.325002] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.700 [2024-12-09 05:04:16.421296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.700 [2024-12-09 05:04:16.457732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.700 [2024-12-09 05:04:16.457767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
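[editor's note] The launch sequence above (nvmf_tgt started via `ip netns exec cvl_0_0_ns_spdk ...`, then `waitforlisten 352175` blocking on `/var/tmp/spdk.sock`) follows a generic launch-and-poll pattern. A minimal hedged sketch, assuming a simple path-existence check stands in for SPDK's fuller RPC-socket probe; `wait_for_path` is an illustrative name, not an SPDK helper:

```shell
#!/usr/bin/env bash
# Sketch of the launch-and-wait pattern: start an app in the background,
# then poll until it has created a ready marker. For SPDK the marker is
# the /var/tmp/spdk.sock UNIX-domain RPC socket; waitforlisten additionally
# probes the RPC interface, which this sketch omits.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [[ -e $path ]] && return 0
        sleep 0.1
    done
    return 1
}

# In this run the call would look roughly like (root + namespace required):
#   ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -m 0x2 &
#   wait_for_path /var/tmp/spdk.sock 100 || exit 1
```

Bounding the retries keeps a target that never comes up from hanging the pipeline; the log's `max_retries=100` default serves the same purpose.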
00:09:34.700 [2024-12-09 05:04:16.457776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.700 [2024-12-09 05:04:16.457785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.700 [2024-12-09 05:04:16.457808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.700 [2024-12-09 05:04:16.458380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.700 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.700 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:34.700 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:34.700 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:34.700 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.961 [2024-12-09 05:04:17.212387] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.961 [2024-12-09 05:04:17.232612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.961 malloc0 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.961 { 00:09:34.961 "params": { 00:09:34.961 "name": "Nvme$subsystem", 00:09:34.961 "trtype": "$TEST_TRANSPORT", 00:09:34.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.961 "adrfam": "ipv4", 00:09:34.961 "trsvcid": "$NVMF_PORT", 00:09:34.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.961 "hdgst": ${hdgst:-false}, 00:09:34.961 "ddgst": ${ddgst:-false} 00:09:34.961 }, 00:09:34.961 "method": "bdev_nvme_attach_controller" 00:09:34.961 } 00:09:34.961 EOF 00:09:34.961 )") 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:34.961 05:04:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.961 "params": { 00:09:34.961 "name": "Nvme1", 00:09:34.961 "trtype": "tcp", 00:09:34.961 "traddr": "10.0.0.2", 00:09:34.961 "adrfam": "ipv4", 00:09:34.961 "trsvcid": "4420", 00:09:34.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.961 "hdgst": false, 00:09:34.961 "ddgst": false 00:09:34.961 }, 00:09:34.961 "method": "bdev_nvme_attach_controller" 00:09:34.961 }' 00:09:34.961 [2024-12-09 05:04:17.318297] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:09:34.961 [2024-12-09 05:04:17.318345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid352457 ] 00:09:34.961 [2024-12-09 05:04:17.411248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.222 [2024-12-09 05:04:17.451685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.483 Running I/O for 10 seconds... 
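[editor's note] The bdevperf invocation above receives its target config via `--json /dev/fd/62`, built by `gen_nvmf_target_json`: one heredoc-generated JSON fragment per subsystem appended to an array, comma-joined with `IFS=,`, and printed (the real helper also pipes through `jq .`). A simplified, self-contained sketch of that shell pattern, with a reduced field set:

```shell
#!/usr/bin/env bash
# Simplified sketch of the gen_nvmf_target_json pattern seen in the trace:
# a heredoc per subsystem, collected into an array, joined with IFS=,.
gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{ "name": "Nvme$subsystem", "trtype": "tcp", "traddr": "10.0.0.2",
  "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem" }
EOF
)")
    done
    local IFS=,
    # With IFS=, the [*] expansion joins the array elements with commas.
    printf '%s\n' "${config[*]}"
}
```

The `"${@:-1}"` default is the same trick the trace shows at `nvmf/common.sh@562`: with no arguments the loop runs once for subsystem 1, which is why the printed config names `Nvme1`/`cnode1`.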
00:09:37.364 8751.00 IOPS, 68.37 MiB/s [2024-12-09T04:04:21.214Z] 8837.50 IOPS, 69.04 MiB/s [2024-12-09T04:04:22.154Z] 8785.33 IOPS, 68.64 MiB/s [2024-12-09T04:04:23.092Z] 8837.50 IOPS, 69.04 MiB/s [2024-12-09T04:04:24.029Z] 8859.60 IOPS, 69.22 MiB/s [2024-12-09T04:04:24.968Z] 8876.33 IOPS, 69.35 MiB/s [2024-12-09T04:04:25.904Z] 8883.43 IOPS, 69.40 MiB/s [2024-12-09T04:04:26.864Z] 8889.62 IOPS, 69.45 MiB/s [2024-12-09T04:04:27.826Z] 8899.67 IOPS, 69.53 MiB/s [2024-12-09T04:04:27.826Z] 8902.90 IOPS, 69.55 MiB/s 00:09:45.356 Latency(us) 00:09:45.356 [2024-12-09T04:04:27.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.356 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:45.356 Verification LBA range: start 0x0 length 0x1000 00:09:45.356 Nvme1n1 : 10.01 8903.29 69.56 0.00 0.00 14335.34 865.08 21810.38 00:09:45.356 [2024-12-09T04:04:27.826Z] =================================================================================================================== 00:09:45.356 [2024-12-09T04:04:27.826Z] Total : 8903.29 69.56 0.00 0.00 14335.34 865.08 21810.38 00:09:45.616 05:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=354318 00:09:45.616 05:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:45.616 05:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.616 05:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:45.616 05:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:45.616 05:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:45.616 05:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:45.616 05:04:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:45.616 05:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:45.616 { 00:09:45.616 "params": { 00:09:45.616 "name": "Nvme$subsystem", 00:09:45.616 "trtype": "$TEST_TRANSPORT", 00:09:45.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:45.616 "adrfam": "ipv4", 00:09:45.616 "trsvcid": "$NVMF_PORT", 00:09:45.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:45.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:45.616 "hdgst": ${hdgst:-false}, 00:09:45.616 "ddgst": ${ddgst:-false} 00:09:45.616 }, 00:09:45.616 "method": "bdev_nvme_attach_controller" 00:09:45.616 } 00:09:45.616 EOF 00:09:45.616 )") 00:09:45.616 05:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:45.616 [2024-12-09 05:04:28.016426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.616 [2024-12-09 05:04:28.016468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.616 05:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:45.616 05:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:45.616 05:04:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:45.616 "params": { 00:09:45.616 "name": "Nvme1", 00:09:45.616 "trtype": "tcp", 00:09:45.616 "traddr": "10.0.0.2", 00:09:45.616 "adrfam": "ipv4", 00:09:45.616 "trsvcid": "4420", 00:09:45.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:45.616 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:45.616 "hdgst": false, 00:09:45.616 "ddgst": false 00:09:45.616 }, 00:09:45.616 "method": "bdev_nvme_attach_controller" 00:09:45.616 }' 00:09:45.616 [2024-12-09 05:04:28.028415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.616 [2024-12-09 05:04:28.028429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.616 [2024-12-09 05:04:28.040441] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.616 [2024-12-09 05:04:28.040453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.616 [2024-12-09 05:04:28.052471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.616 [2024-12-09 05:04:28.052482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.616 [2024-12-09 05:04:28.055476] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:09:45.616 [2024-12-09 05:04:28.055521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid354318 ] 00:09:45.616 [2024-12-09 05:04:28.064504] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.616 [2024-12-09 05:04:28.064515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.616 [2024-12-09 05:04:28.076535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.616 [2024-12-09 05:04:28.076545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.876 [2024-12-09 05:04:28.088569] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.088580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.876 [2024-12-09 05:04:28.100599] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.100610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.876 [2024-12-09 05:04:28.112629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.112640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.876 [2024-12-09 05:04:28.124660] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.124670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.876 [2024-12-09 05:04:28.136694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.136706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:45.876 [2024-12-09 05:04:28.147462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.876 [2024-12-09 05:04:28.148728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.148739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.876 [2024-12-09 05:04:28.160767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.160786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.876 [2024-12-09 05:04:28.172791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.172802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.876 [2024-12-09 05:04:28.184823] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.184835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.876 [2024-12-09 05:04:28.187223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.876 [2024-12-09 05:04:28.196859] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.196872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.876 [2024-12-09 05:04:28.208899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.208919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.876 [2024-12-09 05:04:28.220926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.220941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.876 [2024-12-09 05:04:28.232952] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.232966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.876 [2024-12-09 05:04:28.244985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.244999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.876 [2024-12-09 05:04:28.257017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.257030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.876 [2024-12-09 05:04:28.269047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.269057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.876 [2024-12-09 05:04:28.281099] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.281123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.876 [2024-12-09 05:04:28.293119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.293134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.876 [2024-12-09 05:04:28.305151] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.305166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.876 [2024-12-09 05:04:28.317180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.317191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.876 [2024-12-09 05:04:28.329217] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.329228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.876 [2024-12-09 05:04:28.341253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.876 [2024-12-09 05:04:28.341268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.136 [2024-12-09 05:04:28.353287] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.136 [2024-12-09 05:04:28.353303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.136 [2024-12-09 05:04:28.365323] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.136 [2024-12-09 05:04:28.365340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.136 [2024-12-09 05:04:28.377367] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.136 [2024-12-09 05:04:28.377385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.136 Running I/O for 5 seconds... 
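[editor's note] The interleaved `Requested NSID 1 already in use` / `Unable to add namespace` pairs above show the suite re-issuing `nvmf_subsystem_add_ns` for an NSID that is already attached while bdevperf I/O runs. When a test intends such an RPC to fail, the usual shell idiom is an expect-failure wrapper; a hedged sketch follows (`expect_failure` is an illustrative name, not an SPDK helper, and whether this particular suite asserts on the failure is not visible in the log):

```shell
#!/usr/bin/env bash
# Run a command that is expected to fail; succeed only if it returned
# non-zero, and report loudly if it unexpectedly succeeded.
expect_failure() {
    if "$@"; then
        echo "expected failure, but succeeded: $*" >&2
        return 1
    fi
    return 0
}

# Against this run's target the failing command would be roughly:
#   expect_failure rpc_cmd nvmf_subsystem_add_ns \
#       nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```

Inverting the exit status this way keeps `set -e` pipelines usable for negative tests: the wrapper absorbs the expected non-zero status instead of aborting the script.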
00:09:46.136 [2024-12-09 05:04:28.389395] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.136 [2024-12-09 05:04:28.389406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.136 [2024-12-09 05:04:28.404524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.136 [2024-12-09 05:04:28.404551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.136 [2024-12-09 05:04:28.418769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.136 [2024-12-09 05:04:28.418789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.136 [2024-12-09 05:04:28.432401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.136 [2024-12-09 05:04:28.432422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.136 [2024-12-09 05:04:28.446131] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.136 [2024-12-09 05:04:28.446151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.136 [2024-12-09 05:04:28.459714] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.136 [2024-12-09 05:04:28.459733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.136 [2024-12-09 05:04:28.473564] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.136 [2024-12-09 05:04:28.473588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.136 [2024-12-09 05:04:28.486903] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.136 [2024-12-09 05:04:28.486922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.136 [2024-12-09 05:04:28.500324] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.136 [2024-12-09 05:04:28.500345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.136 [2024-12-09 05:04:28.513925] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.136 [2024-12-09 05:04:28.513944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.136 [2024-12-09 05:04:28.527521] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.136 [2024-12-09 05:04:28.527542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.136 [2024-12-09 05:04:28.541092] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.136 [2024-12-09 05:04:28.541113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.136 [2024-12-09 05:04:28.555004] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.136 [2024-12-09 05:04:28.555025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.136 [2024-12-09 05:04:28.568923] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.136 [2024-12-09 05:04:28.568943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.136 [2024-12-09 05:04:28.582648] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.136 [2024-12-09 05:04:28.582667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.136 [2024-12-09 05:04:28.596623] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.136 [2024-12-09 05:04:28.596642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.396 [2024-12-09 05:04:28.610414] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:46.396 [2024-12-09 05:04:28.610433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.396 [2024-12-09 05:04:28.624197] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.396 [2024-12-09 05:04:28.624223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.396 [2024-12-09 05:04:28.637820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.396 [2024-12-09 05:04:28.637840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.396 [2024-12-09 05:04:28.651153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.396 [2024-12-09 05:04:28.651173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.396 [2024-12-09 05:04:28.664720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.396 [2024-12-09 05:04:28.664739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.396 [2024-12-09 05:04:28.678423] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.396 [2024-12-09 05:04:28.678443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.396 [2024-12-09 05:04:28.692183] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.396 [2024-12-09 05:04:28.692202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.396 [2024-12-09 05:04:28.705619] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.396 [2024-12-09 05:04:28.705638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.396 [2024-12-09 05:04:28.719485] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.396 
[2024-12-09 05:04:28.719505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.396 [2024-12-09 05:04:28.732855] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.396 [2024-12-09 05:04:28.732875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.396 [2024-12-09 05:04:28.746664] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.396 [2024-12-09 05:04:28.746683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.396 [2024-12-09 05:04:28.759991] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.396 [2024-12-09 05:04:28.760011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.396 [2024-12-09 05:04:28.773737] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.396 [2024-12-09 05:04:28.773756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.396 [2024-12-09 05:04:28.787660] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.396 [2024-12-09 05:04:28.787679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.396 [2024-12-09 05:04:28.801248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.396 [2024-12-09 05:04:28.801267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.396 [2024-12-09 05:04:28.814872] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.396 [2024-12-09 05:04:28.814891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.396 [2024-12-09 05:04:28.828339] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.396 [2024-12-09 05:04:28.828362] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.396 [2024-12-09 05:04:28.842151] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.396 [2024-12-09 05:04:28.842170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of messages repeated at ~13 ms intervals through 2024-12-09 05:04:29.376069 ...]
00:09:47.179 17055.00 IOPS, 133.24 MiB/s [2024-12-09T04:04:29.649Z]
[... pair repeated through 2024-12-09 05:04:30.379227 ...]
00:09:47.962 17027.00 IOPS, 133.02 MiB/s [2024-12-09T04:04:30.432Z]
[... pair repeated through 2024-12-09 05:04:30.953973 ...]
00:09:48.744 [2024-12-09 05:04:30.967855] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.744 [2024-12-09 05:04:30.967874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.744 [2024-12-09 05:04:30.981487] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.744 [2024-12-09 05:04:30.981506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.744 [2024-12-09 05:04:30.994751] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.745 [2024-12-09 05:04:30.994771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.745 [2024-12-09 05:04:31.008569] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.745 [2024-12-09 05:04:31.008588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.745 [2024-12-09 05:04:31.022396] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.745 [2024-12-09 05:04:31.022415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.745 [2024-12-09 05:04:31.036199] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.745 [2024-12-09 05:04:31.036226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.745 [2024-12-09 05:04:31.050460] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.745 [2024-12-09 05:04:31.050479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.745 [2024-12-09 05:04:31.066063] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.745 [2024-12-09 05:04:31.066083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.745 [2024-12-09 05:04:31.080119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:48.745 [2024-12-09 05:04:31.080139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.745 [2024-12-09 05:04:31.093697] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.745 [2024-12-09 05:04:31.093717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.745 [2024-12-09 05:04:31.107270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.745 [2024-12-09 05:04:31.107289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.745 [2024-12-09 05:04:31.121911] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.745 [2024-12-09 05:04:31.121930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.745 [2024-12-09 05:04:31.137298] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.745 [2024-12-09 05:04:31.137317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.745 [2024-12-09 05:04:31.150836] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.745 [2024-12-09 05:04:31.150855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.745 [2024-12-09 05:04:31.164527] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.745 [2024-12-09 05:04:31.164552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.745 [2024-12-09 05:04:31.178347] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.745 [2024-12-09 05:04:31.178366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.745 [2024-12-09 05:04:31.192290] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.745 
[2024-12-09 05:04:31.192310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.745 [2024-12-09 05:04:31.206061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.745 [2024-12-09 05:04:31.206080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.027 [2024-12-09 05:04:31.219733] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.027 [2024-12-09 05:04:31.219752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.027 [2024-12-09 05:04:31.233204] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.027 [2024-12-09 05:04:31.233229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.027 [2024-12-09 05:04:31.247078] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.027 [2024-12-09 05:04:31.247097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.027 [2024-12-09 05:04:31.260505] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.027 [2024-12-09 05:04:31.260524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.028 [2024-12-09 05:04:31.274552] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.028 [2024-12-09 05:04:31.274572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.028 [2024-12-09 05:04:31.288429] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.028 [2024-12-09 05:04:31.288448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.028 [2024-12-09 05:04:31.302077] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.028 [2024-12-09 05:04:31.302096] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.028 [2024-12-09 05:04:31.315933] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.028 [2024-12-09 05:04:31.315951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.028 [2024-12-09 05:04:31.329760] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.028 [2024-12-09 05:04:31.329779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.028 [2024-12-09 05:04:31.343329] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.028 [2024-12-09 05:04:31.343349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.028 [2024-12-09 05:04:31.357444] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.028 [2024-12-09 05:04:31.357463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.028 [2024-12-09 05:04:31.371160] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.028 [2024-12-09 05:04:31.371179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.028 [2024-12-09 05:04:31.384790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.028 [2024-12-09 05:04:31.384810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.028 17061.67 IOPS, 133.29 MiB/s [2024-12-09T04:04:31.498Z] [2024-12-09 05:04:31.398285] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.028 [2024-12-09 05:04:31.398304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.028 [2024-12-09 05:04:31.411760] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.028 [2024-12-09 05:04:31.411779] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.028 [2024-12-09 05:04:31.425656] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.028 [2024-12-09 05:04:31.425675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.028 [2024-12-09 05:04:31.439330] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.028 [2024-12-09 05:04:31.439350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.028 [2024-12-09 05:04:31.453198] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.028 [2024-12-09 05:04:31.453224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.028 [2024-12-09 05:04:31.466570] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.028 [2024-12-09 05:04:31.466589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.028 [2024-12-09 05:04:31.480250] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.028 [2024-12-09 05:04:31.480269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.028 [2024-12-09 05:04:31.493894] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.028 [2024-12-09 05:04:31.493913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.289 [2024-12-09 05:04:31.507092] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.289 [2024-12-09 05:04:31.507112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.289 [2024-12-09 05:04:31.520902] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.289 [2024-12-09 05:04:31.520920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:49.289 [2024-12-09 05:04:31.534830] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.289 [2024-12-09 05:04:31.534849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.289 [2024-12-09 05:04:31.548980] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.289 [2024-12-09 05:04:31.548998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.289 [2024-12-09 05:04:31.562740] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.289 [2024-12-09 05:04:31.562759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.289 [2024-12-09 05:04:31.576572] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.289 [2024-12-09 05:04:31.576591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.289 [2024-12-09 05:04:31.590427] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.289 [2024-12-09 05:04:31.590446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.289 [2024-12-09 05:04:31.604433] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.289 [2024-12-09 05:04:31.604453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.289 [2024-12-09 05:04:31.618266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.289 [2024-12-09 05:04:31.618285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.289 [2024-12-09 05:04:31.631966] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.289 [2024-12-09 05:04:31.631985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.289 [2024-12-09 05:04:31.646223] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.289 [2024-12-09 05:04:31.646243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.289 [2024-12-09 05:04:31.661758] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.289 [2024-12-09 05:04:31.661778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.289 [2024-12-09 05:04:31.675840] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.289 [2024-12-09 05:04:31.675859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.289 [2024-12-09 05:04:31.689486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.289 [2024-12-09 05:04:31.689505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.289 [2024-12-09 05:04:31.703279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.289 [2024-12-09 05:04:31.703297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.289 [2024-12-09 05:04:31.717029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.289 [2024-12-09 05:04:31.717048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.289 [2024-12-09 05:04:31.730432] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.289 [2024-12-09 05:04:31.730451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.289 [2024-12-09 05:04:31.744114] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.289 [2024-12-09 05:04:31.744132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.289 [2024-12-09 05:04:31.757770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:49.289 [2024-12-09 05:04:31.757790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.548 [2024-12-09 05:04:31.771311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.548 [2024-12-09 05:04:31.771330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.548 [2024-12-09 05:04:31.785214] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.548 [2024-12-09 05:04:31.785249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.548 [2024-12-09 05:04:31.798771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.548 [2024-12-09 05:04:31.798789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.548 [2024-12-09 05:04:31.812189] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.548 [2024-12-09 05:04:31.812212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.548 [2024-12-09 05:04:31.826149] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.548 [2024-12-09 05:04:31.826167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.548 [2024-12-09 05:04:31.839839] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.548 [2024-12-09 05:04:31.839859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.548 [2024-12-09 05:04:31.853485] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.548 [2024-12-09 05:04:31.853504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.548 [2024-12-09 05:04:31.867204] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.548 
[2024-12-09 05:04:31.867228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.548 [2024-12-09 05:04:31.880875] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.549 [2024-12-09 05:04:31.880895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.549 [2024-12-09 05:04:31.894383] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.549 [2024-12-09 05:04:31.894402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.549 [2024-12-09 05:04:31.908047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.549 [2024-12-09 05:04:31.908067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.549 [2024-12-09 05:04:31.921927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.549 [2024-12-09 05:04:31.921948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.549 [2024-12-09 05:04:31.935263] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.549 [2024-12-09 05:04:31.935286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.549 [2024-12-09 05:04:31.949018] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.549 [2024-12-09 05:04:31.949037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.549 [2024-12-09 05:04:31.962426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.549 [2024-12-09 05:04:31.962445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.549 [2024-12-09 05:04:31.975669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.549 [2024-12-09 05:04:31.975688] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.549 [2024-12-09 05:04:31.989122] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.549 [2024-12-09 05:04:31.989141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.549 [2024-12-09 05:04:32.002719] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.549 [2024-12-09 05:04:32.002739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.549 [2024-12-09 05:04:32.016294] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.549 [2024-12-09 05:04:32.016313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.808 [2024-12-09 05:04:32.029906] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.808 [2024-12-09 05:04:32.029927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.808 [2024-12-09 05:04:32.044324] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.808 [2024-12-09 05:04:32.044344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.808 [2024-12-09 05:04:32.057728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.808 [2024-12-09 05:04:32.057748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.808 [2024-12-09 05:04:32.071249] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.808 [2024-12-09 05:04:32.071270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.808 [2024-12-09 05:04:32.084893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.808 [2024-12-09 05:04:32.084913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:49.808 [2024-12-09 05:04:32.099004] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.808 [2024-12-09 05:04:32.099026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.808 [2024-12-09 05:04:32.112334] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.808 [2024-12-09 05:04:32.112354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.808 [2024-12-09 05:04:32.125889] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.808 [2024-12-09 05:04:32.125909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.808 [2024-12-09 05:04:32.139030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.808 [2024-12-09 05:04:32.139049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.808 [2024-12-09 05:04:32.152415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.808 [2024-12-09 05:04:32.152435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.808 [2024-12-09 05:04:32.166401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.808 [2024-12-09 05:04:32.166421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.808 [2024-12-09 05:04:32.179832] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.808 [2024-12-09 05:04:32.179852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.808 [2024-12-09 05:04:32.193640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.808 [2024-12-09 05:04:32.193664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.808 [2024-12-09 05:04:32.207160] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.808 [2024-12-09 05:04:32.207179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.808 [2024-12-09 05:04:32.220804] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.808 [2024-12-09 05:04:32.220824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.808 [2024-12-09 05:04:32.234412] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.808 [2024-12-09 05:04:32.234432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.808 [2024-12-09 05:04:32.248014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.808 [2024-12-09 05:04:32.248033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.808 [2024-12-09 05:04:32.262213] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.808 [2024-12-09 05:04:32.262232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.808 [2024-12-09 05:04:32.273068] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.808 [2024-12-09 05:04:32.273088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.067 [2024-12-09 05:04:32.287462] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.067 [2024-12-09 05:04:32.287481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.067 [2024-12-09 05:04:32.301044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.067 [2024-12-09 05:04:32.301064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.067 [2024-12-09 05:04:32.314936] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:50.067 [2024-12-09 05:04:32.314956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.067 [2024-12-09 05:04:32.328479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.067 [2024-12-09 05:04:32.328498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.067 [2024-12-09 05:04:32.341993] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.067 [2024-12-09 05:04:32.342012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.067 [2024-12-09 05:04:32.355430] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.067 [2024-12-09 05:04:32.355450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.067 [2024-12-09 05:04:32.369235] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.067 [2024-12-09 05:04:32.369253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.067 [2024-12-09 05:04:32.383479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.067 [2024-12-09 05:04:32.383498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.067 17089.25 IOPS, 133.51 MiB/s [2024-12-09T04:04:32.537Z] [2024-12-09 05:04:32.399512] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.067 [2024-12-09 05:04:32.399531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.067 [2024-12-09 05:04:32.413142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.067 [2024-12-09 05:04:32.413162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.067 [2024-12-09 05:04:32.426879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:50.067 [2024-12-09 05:04:32.426898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.067 [2024-12-09 05:04:32.440692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.067 [2024-12-09 05:04:32.440712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.067 [2024-12-09 05:04:32.454413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.067 [2024-12-09 05:04:32.454436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.067 [2024-12-09 05:04:32.468571] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.067 [2024-12-09 05:04:32.468592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.067 [2024-12-09 05:04:32.479699] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.067 [2024-12-09 05:04:32.479719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.068 [2024-12-09 05:04:32.493750] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.068 [2024-12-09 05:04:32.493770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.068 [2024-12-09 05:04:32.507063] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.068 [2024-12-09 05:04:32.507083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.068 [2024-12-09 05:04:32.520887] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.068 [2024-12-09 05:04:32.520907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.068 [2024-12-09 05:04:32.534310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.068 
[2024-12-09 05:04:32.534339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.327 [2024-12-09 05:04:32.548152] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.327 [2024-12-09 05:04:32.548172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.327 [2024-12-09 05:04:32.561718] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.327 [2024-12-09 05:04:32.561737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.327 [2024-12-09 05:04:32.575495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.327 [2024-12-09 05:04:32.575514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.327 [2024-12-09 05:04:32.588907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.327 [2024-12-09 05:04:32.588927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.327 [2024-12-09 05:04:32.602309] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.327 [2024-12-09 05:04:32.602328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.327 [2024-12-09 05:04:32.616140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.327 [2024-12-09 05:04:32.616162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.327 [2024-12-09 05:04:32.629733] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.327 [2024-12-09 05:04:32.629752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.327 [2024-12-09 05:04:32.643415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.327 [2024-12-09 05:04:32.643434] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.327 [2024-12-09 05:04:32.657416] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.327 [2024-12-09 05:04:32.657435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [... identical "Requested NSID 1 already in use" / "Unable to add namespace" error pairs repeated at ~14 ms intervals from 05:04:32.671 through 05:04:33.301 omitted ...] 00:09:50.847 [2024-12-09 05:04:33.314950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.847 [2024-12-09 05:04:33.314969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:09:51.108 [2024-12-09 05:04:33.328706] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.108 [2024-12-09 05:04:33.328725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.108 [2024-12-09 05:04:33.342413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.108 [2024-12-09 05:04:33.342432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.108 [2024-12-09 05:04:33.355715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.108 [2024-12-09 05:04:33.355735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.108 [2024-12-09 05:04:33.370163] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.108 [2024-12-09 05:04:33.370181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.108 [2024-12-09 05:04:33.385109] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.108 [2024-12-09 05:04:33.385128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.108 17097.00 IOPS, 133.57 MiB/s [2024-12-09T04:04:33.578Z] [2024-12-09 05:04:33.399412] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.108 [2024-12-09 05:04:33.399432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.108 00:09:51.108 Latency(us) 00:09:51.108 [2024-12-09T04:04:33.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.108 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:51.108 Nvme1n1 : 5.01 17099.39 133.59 0.00 0.00 7478.93 3565.16 17091.79 00:09:51.108 [2024-12-09T04:04:33.578Z] 
=================================================================================================================== 00:09:51.108 [2024-12-09T04:04:33.578Z] Total : 17099.39 133.59 0.00 0.00 7478.93 3565.16 17091.79 00:09:51.108 [2024-12-09 05:04:33.409205] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.108 [2024-12-09 05:04:33.409226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.108 [2024-12-09 05:04:33.421236] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.108 [2024-12-09 05:04:33.421251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.108 [2024-12-09 05:04:33.433284] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.108 [2024-12-09 05:04:33.433312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.108 [2024-12-09 05:04:33.445302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.108 [2024-12-09 05:04:33.445321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.108 [2024-12-09 05:04:33.457329] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.108 [2024-12-09 05:04:33.457344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.108 [2024-12-09 05:04:33.469358] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.108 [2024-12-09 05:04:33.469371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.108 [2024-12-09 05:04:33.481387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.108 [2024-12-09 05:04:33.481402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.108 [2024-12-09 05:04:33.493428] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.108 [2024-12-09 05:04:33.493442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.108 [2024-12-09 05:04:33.505454] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.108 [2024-12-09 05:04:33.505468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.108 [2024-12-09 05:04:33.517485] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.108 [2024-12-09 05:04:33.517498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.108 [2024-12-09 05:04:33.529515] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.108 [2024-12-09 05:04:33.529525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.108 [2024-12-09 05:04:33.541553] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.108 [2024-12-09 05:04:33.541566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.108 [2024-12-09 05:04:33.553581] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.108 [2024-12-09 05:04:33.553591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.108 [2024-12-09 05:04:33.565613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.108 [2024-12-09 05:04:33.565626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.368 [2024-12-09 05:04:33.577644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.368 [2024-12-09 05:04:33.577655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.368 [2024-12-09 05:04:33.589674] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:51.368 [2024-12-09 05:04:33.589685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.368 [2024-12-09 05:04:33.601706] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.368 [2024-12-09 05:04:33.601718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (354318) - No such process 00:09:51.368 05:04:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 354318 00:09:51.368 05:04:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.368 05:04:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.368 05:04:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.368 05:04:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.368 05:04:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:51.368 05:04:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.368 05:04:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.368 delay0 00:09:51.368 05:04:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.368 05:04:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:51.368 05:04:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.368 05:04:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:09:51.368 05:04:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.368 05:04:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:51.368 [2024-12-09 05:04:33.768032] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:57.942 Initializing NVMe Controllers 00:09:57.942 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:57.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:57.942 Initialization complete. Launching workers. 00:09:57.942 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 271 00:09:57.942 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 558, failed to submit 33 00:09:57.942 success 369, unsuccessful 189, failed 0 00:09:57.942 05:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:57.942 05:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:57.942 05:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:57.942 05:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:57.942 05:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:57.942 05:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:57.942 05:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:57.942 05:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:57.942 rmmod nvme_tcp 00:09:57.942 rmmod nvme_fabrics 
00:09:57.942 rmmod nvme_keyring 00:09:57.942 05:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:57.942 05:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:57.942 05:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:57.942 05:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 352175 ']' 00:09:57.942 05:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 352175 00:09:57.942 05:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 352175 ']' 00:09:57.942 05:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 352175 00:09:57.943 05:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:57.943 05:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.943 05:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 352175 00:09:57.943 05:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:57.943 05:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:57.943 05:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 352175' 00:09:57.943 killing process with pid 352175 00:09:57.943 05:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 352175 00:09:57.943 05:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 352175 00:09:57.943 05:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:57.943 05:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:57.943 05:04:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:57.943 05:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:57.943 05:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:57.943 05:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:57.943 05:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:57.943 05:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:57.943 05:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:57.943 05:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.943 05:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.943 05:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:00.486 00:10:00.486 real 0m33.669s 00:10:00.486 user 0m43.925s 00:10:00.486 sys 0m12.579s 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:00.486 ************************************ 00:10:00.486 END TEST nvmf_zcopy 00:10:00.486 ************************************ 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.486 ************************************ 00:10:00.486 START TEST nvmf_nmic 00:10:00.486 ************************************ 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:00.486 * Looking for test storage... 00:10:00.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.486 05:04:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:00.486 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:00.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.487 --rc genhtml_branch_coverage=1 00:10:00.487 --rc genhtml_function_coverage=1 00:10:00.487 --rc genhtml_legend=1 00:10:00.487 --rc geninfo_all_blocks=1 00:10:00.487 --rc geninfo_unexecuted_blocks=1 00:10:00.487 00:10:00.487 ' 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:00.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.487 --rc genhtml_branch_coverage=1 00:10:00.487 --rc genhtml_function_coverage=1 00:10:00.487 --rc genhtml_legend=1 00:10:00.487 --rc geninfo_all_blocks=1 00:10:00.487 --rc geninfo_unexecuted_blocks=1 00:10:00.487 00:10:00.487 ' 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:00.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.487 --rc genhtml_branch_coverage=1 00:10:00.487 --rc genhtml_function_coverage=1 00:10:00.487 --rc genhtml_legend=1 00:10:00.487 --rc geninfo_all_blocks=1 00:10:00.487 --rc geninfo_unexecuted_blocks=1 00:10:00.487 00:10:00.487 ' 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:00.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.487 --rc genhtml_branch_coverage=1 00:10:00.487 --rc genhtml_function_coverage=1 00:10:00.487 --rc genhtml_legend=1 00:10:00.487 --rc geninfo_all_blocks=1 00:10:00.487 --rc geninfo_unexecuted_blocks=1 00:10:00.487 00:10:00.487 ' 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.487 05:04:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:00.487 
05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:00.487 05:04:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.619 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.620 05:04:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:08.620 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:08.620 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:08.620 Found net devices under 0000:af:00.0: cvl_0_0 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:08.620 Found net devices under 0000:af:00.1: cvl_0_1 00:10:08.620 
05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:08.620 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:08.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:10:08.620 00:10:08.620 --- 10.0.0.2 ping statistics --- 00:10:08.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.621 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:10:08.621 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:08.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:10:08.621 00:10:08.621 --- 10.0.0.1 ping statistics --- 00:10:08.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.621 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:10:08.621 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.621 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:08.621 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:08.621 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.621 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:08.621 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:08.621 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.621 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:08.621 05:04:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=360150 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:08.621 
05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 360150 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 360150 ']' 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.621 [2024-12-09 05:04:50.076279] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:10:08.621 [2024-12-09 05:04:50.076337] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.621 [2024-12-09 05:04:50.175090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:08.621 [2024-12-09 05:04:50.218912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.621 [2024-12-09 05:04:50.218948] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.621 [2024-12-09 05:04:50.218958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.621 [2024-12-09 05:04:50.218967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:08.621 [2024-12-09 05:04:50.218975] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.621 [2024-12-09 05:04:50.220688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.621 [2024-12-09 05:04:50.220796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.621 [2024-12-09 05:04:50.220903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.621 [2024-12-09 05:04:50.220905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.621 [2024-12-09 05:04:50.972697] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:08.621 05:04:50 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.621 05:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.621 Malloc0 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.621 [2024-12-09 05:04:51.038608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:08.621 test case1: single bdev can't be used in multiple subsystems 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.621 [2024-12-09 05:04:51.066464] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:08.621 [2024-12-09 05:04:51.066484] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:08.621 [2024-12-09 05:04:51.066494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:10:08.621 request: 00:10:08.621 { 00:10:08.621 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:08.621 "namespace": { 00:10:08.621 "bdev_name": "Malloc0", 00:10:08.621 "no_auto_visible": false, 00:10:08.621 "hide_metadata": false 00:10:08.621 }, 00:10:08.621 "method": "nvmf_subsystem_add_ns", 00:10:08.621 "req_id": 1 00:10:08.621 } 00:10:08.621 Got JSON-RPC error response 00:10:08.621 response: 00:10:08.621 { 00:10:08.621 "code": -32602, 00:10:08.621 "message": "Invalid parameters" 00:10:08.621 } 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:08.621 Adding namespace failed - expected result. 
00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:08.621 test case2: host connect to nvmf target in multiple paths 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.621 [2024-12-09 05:04:51.082641] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.621 05:04:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:10.003 05:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:11.381 05:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:11.381 05:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:11.381 05:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:11.381 05:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:11.381 05:04:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:10:13.919 05:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:13.919 05:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:13.919 05:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:13.919 05:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:13.919 05:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:13.919 05:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:13.919 05:04:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:13.919 [global] 00:10:13.919 thread=1 00:10:13.919 invalidate=1 00:10:13.919 rw=write 00:10:13.919 time_based=1 00:10:13.919 runtime=1 00:10:13.919 ioengine=libaio 00:10:13.919 direct=1 00:10:13.919 bs=4096 00:10:13.919 iodepth=1 00:10:13.919 norandommap=0 00:10:13.919 numjobs=1 00:10:13.919 00:10:13.919 verify_dump=1 00:10:13.919 verify_backlog=512 00:10:13.919 verify_state_save=0 00:10:13.919 do_verify=1 00:10:13.919 verify=crc32c-intel 00:10:13.919 [job0] 00:10:13.919 filename=/dev/nvme0n1 00:10:13.919 Could not set queue depth (nvme0n1) 00:10:14.178 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.178 fio-3.35 00:10:14.178 Starting 1 thread 00:10:15.121 00:10:15.121 job0: (groupid=0, jobs=1): err= 0: pid=361396: Mon Dec 9 05:04:57 2024 00:10:15.121 read: IOPS=2484, BW=9938KiB/s (10.2MB/s)(9948KiB/1001msec) 00:10:15.121 slat (nsec): min=8725, max=46424, avg=9505.34, stdev=1661.87 00:10:15.121 clat (usec): min=173, max=420, avg=227.01, stdev=26.75 00:10:15.121 lat (usec): min=190, max=428, avg=236.52, 
stdev=26.78 00:10:15.121 clat percentiles (usec): 00:10:15.121 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 202], 00:10:15.121 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 243], 00:10:15.121 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 262], 95.00th=[ 265], 00:10:15.121 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 400], 99.95th=[ 412], 00:10:15.121 | 99.99th=[ 420] 00:10:15.121 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:15.121 slat (nsec): min=11717, max=50033, avg=12730.25, stdev=1984.32 00:10:15.121 clat (usec): min=111, max=265, avg=142.62, stdev= 8.29 00:10:15.121 lat (usec): min=127, max=314, avg=155.35, stdev= 8.72 00:10:15.121 clat percentiles (usec): 00:10:15.121 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 139], 00:10:15.121 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 143], 00:10:15.121 | 70.00th=[ 145], 80.00th=[ 147], 90.00th=[ 149], 95.00th=[ 153], 00:10:15.121 | 99.00th=[ 178], 99.50th=[ 186], 99.90th=[ 200], 99.95th=[ 239], 00:10:15.121 | 99.99th=[ 265] 00:10:15.121 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:15.121 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:15.121 lat (usec) : 250=84.09%, 500=15.91% 00:10:15.121 cpu : usr=4.50%, sys=8.60%, ctx=5047, majf=0, minf=1 00:10:15.121 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.121 issued rwts: total=2487,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.121 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.121 00:10:15.121 Run status group 0 (all jobs): 00:10:15.121 READ: bw=9938KiB/s (10.2MB/s), 9938KiB/s-9938KiB/s (10.2MB/s-10.2MB/s), io=9948KiB (10.2MB), run=1001-1001msec 00:10:15.121 WRITE: bw=9.99MiB/s (10.5MB/s), 
9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:10:15.121 00:10:15.121 Disk stats (read/write): 00:10:15.121 nvme0n1: ios=2104/2560, merge=0/0, ticks=451/349, in_queue=800, util=91.28% 00:10:15.121 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:15.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:15.381 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:15.381 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:15.382 05:04:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:15.382 rmmod nvme_tcp 00:10:15.382 rmmod nvme_fabrics 00:10:15.382 rmmod nvme_keyring 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 360150 ']' 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 360150 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 360150 ']' 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 360150 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.382 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 360150 00:10:15.642 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.642 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.642 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 360150' 00:10:15.642 killing process with pid 360150 00:10:15.642 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 360150 00:10:15.642 05:04:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 360150 00:10:15.903 05:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
00:10:15.903 05:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:15.903 05:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:15.903 05:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:15.903 05:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:15.903 05:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:15.903 05:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:15.903 05:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:15.903 05:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:15.903 05:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.903 05:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.903 05:04:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.815 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:17.815 00:10:17.815 real 0m17.756s 00:10:17.815 user 0m41.524s 00:10:17.815 sys 0m6.989s 00:10:17.815 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.815 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.815 ************************************ 00:10:17.815 END TEST nvmf_nmic 00:10:17.815 ************************************ 00:10:17.815 05:05:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:17.815 05:05:00 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.815 05:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.815 05:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:18.075 ************************************ 00:10:18.075 START TEST nvmf_fio_target 00:10:18.075 ************************************ 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:18.075 * Looking for test storage... 00:10:18.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.075 
05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:18.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.075 --rc genhtml_branch_coverage=1 00:10:18.075 --rc genhtml_function_coverage=1 00:10:18.075 --rc genhtml_legend=1 00:10:18.075 --rc geninfo_all_blocks=1 00:10:18.075 --rc geninfo_unexecuted_blocks=1 00:10:18.075 00:10:18.075 ' 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:18.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.075 --rc genhtml_branch_coverage=1 00:10:18.075 --rc genhtml_function_coverage=1 00:10:18.075 --rc genhtml_legend=1 00:10:18.075 --rc geninfo_all_blocks=1 00:10:18.075 --rc geninfo_unexecuted_blocks=1 00:10:18.075 00:10:18.075 ' 00:10:18.075 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:18.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.076 --rc genhtml_branch_coverage=1 00:10:18.076 --rc genhtml_function_coverage=1 00:10:18.076 --rc genhtml_legend=1 00:10:18.076 --rc geninfo_all_blocks=1 00:10:18.076 --rc geninfo_unexecuted_blocks=1 00:10:18.076 00:10:18.076 ' 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:18.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.076 --rc genhtml_branch_coverage=1 00:10:18.076 --rc 
genhtml_function_coverage=1 00:10:18.076 --rc genhtml_legend=1 00:10:18.076 --rc geninfo_all_blocks=1 00:10:18.076 --rc geninfo_unexecuted_blocks=1 00:10:18.076 00:10:18.076 ' 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:18.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:18.076 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.335 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:18.335 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:18.335 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:18.335 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.335 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.335 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.335 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:18.335 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:18.335 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:18.335 05:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:26.461 05:05:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:26.461 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:26.461 05:05:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:26.461 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:26.461 Found net devices under 0000:af:00.0: cvl_0_0 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:26.461 Found net devices under 0000:af:00.1: cvl_0_1 
00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:26.461 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:26.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:26.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:10:26.462 00:10:26.462 --- 10.0.0.2 ping statistics --- 00:10:26.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.462 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:26.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:26.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:10:26.462 00:10:26.462 --- 10.0.0.1 ping statistics --- 00:10:26.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.462 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=365381 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 365381 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 365381 ']' 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:26.462 05:05:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.462 [2024-12-09 05:05:07.889639] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:10:26.462 [2024-12-09 05:05:07.889692] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.462 [2024-12-09 05:05:07.987428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:26.462 [2024-12-09 05:05:08.027356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.462 [2024-12-09 05:05:08.027396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.462 [2024-12-09 05:05:08.027405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.462 [2024-12-09 05:05:08.027414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.462 [2024-12-09 05:05:08.027421] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:26.462 [2024-12-09 05:05:08.029144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.462 [2024-12-09 05:05:08.029303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.462 [2024-12-09 05:05:08.029305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:26.462 [2024-12-09 05:05:08.029267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.462 05:05:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.462 05:05:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:26.462 05:05:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:26.462 05:05:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:26.462 05:05:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.462 05:05:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.462 05:05:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:26.722 [2024-12-09 05:05:08.942858] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.722 05:05:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:26.982 05:05:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:26.982 05:05:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:26.982 05:05:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:26.982 05:05:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:27.242 05:05:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:27.242 05:05:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:27.502 05:05:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:27.502 05:05:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:27.762 05:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:28.022 05:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:28.022 05:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:28.022 05:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:28.022 05:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:28.281 05:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:28.281 05:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:28.540 05:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:28.799 05:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:28.799 05:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:28.799 05:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:28.799 05:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:29.059 05:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:29.318 [2024-12-09 05:05:11.641038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:29.318 05:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:29.578 05:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:29.837 05:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:10:31.216 05:05:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:31.216 05:05:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:31.216 05:05:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:31.216 05:05:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:31.216 05:05:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:31.216 05:05:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:33.124 05:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:33.124 05:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:33.124 05:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:33.124 05:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:33.124 05:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:33.124 05:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:33.124 05:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:33.124 [global] 00:10:33.124 thread=1 00:10:33.124 invalidate=1 00:10:33.124 rw=write 00:10:33.124 time_based=1 00:10:33.124 runtime=1 00:10:33.124 ioengine=libaio 00:10:33.124 direct=1 00:10:33.124 bs=4096 00:10:33.124 iodepth=1 00:10:33.124 norandommap=0 00:10:33.124 numjobs=1 00:10:33.124 00:10:33.124 
verify_dump=1 00:10:33.124 verify_backlog=512 00:10:33.124 verify_state_save=0 00:10:33.124 do_verify=1 00:10:33.124 verify=crc32c-intel 00:10:33.125 [job0] 00:10:33.125 filename=/dev/nvme0n1 00:10:33.125 [job1] 00:10:33.125 filename=/dev/nvme0n2 00:10:33.125 [job2] 00:10:33.125 filename=/dev/nvme0n3 00:10:33.125 [job3] 00:10:33.125 filename=/dev/nvme0n4 00:10:33.125 Could not set queue depth (nvme0n1) 00:10:33.125 Could not set queue depth (nvme0n2) 00:10:33.125 Could not set queue depth (nvme0n3) 00:10:33.125 Could not set queue depth (nvme0n4) 00:10:33.383 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.383 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.383 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.383 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.383 fio-3.35 00:10:33.383 Starting 4 threads 00:10:34.765 00:10:34.765 job0: (groupid=0, jobs=1): err= 0: pid=366937: Mon Dec 9 05:05:17 2024 00:10:34.765 read: IOPS=22, BW=89.1KiB/s (91.2kB/s)(92.0KiB/1033msec) 00:10:34.765 slat (nsec): min=11524, max=25353, avg=23852.30, stdev=2724.52 00:10:34.765 clat (usec): min=40854, max=41366, avg=40987.10, stdev=95.58 00:10:34.765 lat (usec): min=40880, max=41378, avg=41010.95, stdev=93.13 00:10:34.765 clat percentiles (usec): 00:10:34.765 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:34.765 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:34.765 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:34.765 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:34.765 | 99.99th=[41157] 00:10:34.765 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:10:34.765 slat (nsec): min=11299, max=49360, 
avg=12404.77, stdev=2221.40 00:10:34.765 clat (usec): min=129, max=377, avg=160.40, stdev=23.14 00:10:34.765 lat (usec): min=141, max=416, avg=172.81, stdev=23.87 00:10:34.765 clat percentiles (usec): 00:10:34.765 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:10:34.765 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:10:34.765 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 184], 00:10:34.765 | 99.00th=[ 198], 99.50th=[ 347], 99.90th=[ 379], 99.95th=[ 379], 00:10:34.765 | 99.99th=[ 379] 00:10:34.765 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:10:34.765 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:34.765 lat (usec) : 250=94.77%, 500=0.93% 00:10:34.765 lat (msec) : 50=4.30% 00:10:34.765 cpu : usr=0.58%, sys=0.39%, ctx=535, majf=0, minf=1 00:10:34.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.765 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.765 job1: (groupid=0, jobs=1): err= 0: pid=366943: Mon Dec 9 05:05:17 2024 00:10:34.765 read: IOPS=22, BW=88.9KiB/s (91.0kB/s)(92.0KiB/1035msec) 00:10:34.765 slat (nsec): min=11652, max=26340, avg=24073.70, stdev=2745.90 00:10:34.765 clat (usec): min=40826, max=41389, avg=40988.38, stdev=103.81 00:10:34.765 lat (usec): min=40851, max=41401, avg=41012.45, stdev=101.60 00:10:34.765 clat percentiles (usec): 00:10:34.765 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:34.765 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:34.765 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:34.765 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:10:34.765 | 99.99th=[41157] 00:10:34.765 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:10:34.765 slat (usec): min=12, max=988, avg=15.69, stdev=43.14 00:10:34.765 clat (usec): min=130, max=235, avg=158.91, stdev=12.94 00:10:34.765 lat (usec): min=143, max=1149, avg=174.60, stdev=45.25 00:10:34.765 clat percentiles (usec): 00:10:34.765 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:10:34.765 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:10:34.765 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 180], 00:10:34.765 | 99.00th=[ 194], 99.50th=[ 196], 99.90th=[ 237], 99.95th=[ 237], 00:10:34.765 | 99.99th=[ 237] 00:10:34.765 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:10:34.765 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:34.765 lat (usec) : 250=95.70% 00:10:34.765 lat (msec) : 50=4.30% 00:10:34.765 cpu : usr=0.39%, sys=1.06%, ctx=539, majf=0, minf=1 00:10:34.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.765 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.765 job2: (groupid=0, jobs=1): err= 0: pid=366957: Mon Dec 9 05:05:17 2024 00:10:34.765 read: IOPS=25, BW=104KiB/s (106kB/s)(108KiB/1041msec) 00:10:34.765 slat (nsec): min=10037, max=25870, avg=21346.81, stdev=5162.08 00:10:34.765 clat (usec): min=272, max=41072, avg=34948.49, stdev=14728.97 00:10:34.765 lat (usec): min=297, max=41096, avg=34969.84, stdev=14731.89 00:10:34.765 clat percentiles (usec): 00:10:34.765 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[41157], 00:10:34.765 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 
60.00th=[41157], 00:10:34.765 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:34.765 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:34.765 | 99.99th=[41157] 00:10:34.765 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:10:34.765 slat (nsec): min=12165, max=48182, avg=13686.05, stdev=2692.65 00:10:34.765 clat (usec): min=138, max=341, avg=172.27, stdev=19.51 00:10:34.765 lat (usec): min=151, max=389, avg=185.95, stdev=20.50 00:10:34.765 clat percentiles (usec): 00:10:34.765 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 159], 00:10:34.765 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:10:34.765 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 206], 00:10:34.765 | 99.00th=[ 245], 99.50th=[ 258], 99.90th=[ 343], 99.95th=[ 343], 00:10:34.765 | 99.99th=[ 343] 00:10:34.765 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:10:34.765 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:34.765 lat (usec) : 250=94.25%, 500=1.48% 00:10:34.765 lat (msec) : 50=4.27% 00:10:34.765 cpu : usr=0.67%, sys=0.77%, ctx=539, majf=0, minf=1 00:10:34.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.765 issued rwts: total=27,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.765 job3: (groupid=0, jobs=1): err= 0: pid=366959: Mon Dec 9 05:05:17 2024 00:10:34.765 read: IOPS=20, BW=81.8KiB/s (83.8kB/s)(84.0KiB/1027msec) 00:10:34.765 slat (nsec): min=11721, max=26785, avg=23879.43, stdev=2880.85 00:10:34.765 clat (usec): min=40484, max=41063, avg=40947.21, stdev=113.95 00:10:34.765 lat (usec): min=40495, max=41089, avg=40971.09, stdev=116.55 
00:10:34.765 clat percentiles (usec): 00:10:34.765 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:34.765 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:34.765 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:34.765 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:34.765 | 99.99th=[41157] 00:10:34.765 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:10:34.765 slat (usec): min=12, max=40673, avg=165.55, stdev=2426.44 00:10:34.765 clat (usec): min=126, max=607, avg=153.98, stdev=30.39 00:10:34.765 lat (usec): min=139, max=40982, avg=319.53, stdev=2437.50 00:10:34.765 clat percentiles (usec): 00:10:34.765 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:10:34.765 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:10:34.765 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 180], 95.00th=[ 198], 00:10:34.765 | 99.00th=[ 255], 99.50th=[ 310], 99.90th=[ 611], 99.95th=[ 611], 00:10:34.765 | 99.99th=[ 611] 00:10:34.765 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:10:34.765 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:34.765 lat (usec) : 250=94.93%, 500=0.94%, 750=0.19% 00:10:34.765 lat (msec) : 50=3.94% 00:10:34.765 cpu : usr=0.97%, sys=0.49%, ctx=537, majf=0, minf=1 00:10:34.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.765 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.765 00:10:34.765 Run status group 0 (all jobs): 00:10:34.765 READ: bw=361KiB/s (370kB/s), 81.8KiB/s-104KiB/s (83.8kB/s-106kB/s), io=376KiB (385kB), run=1027-1041msec 
00:10:34.766 WRITE: bw=7869KiB/s (8058kB/s), 1967KiB/s-1994KiB/s (2015kB/s-2042kB/s), io=8192KiB (8389kB), run=1027-1041msec 00:10:34.766 00:10:34.766 Disk stats (read/write): 00:10:34.766 nvme0n1: ios=67/512, merge=0/0, ticks=718/81, in_queue=799, util=83.47% 00:10:34.766 nvme0n2: ios=67/512, merge=0/0, ticks=1003/75, in_queue=1078, util=87.06% 00:10:34.766 nvme0n3: ios=78/512, merge=0/0, ticks=769/86, in_queue=855, util=91.64% 00:10:34.766 nvme0n4: ios=38/512, merge=0/0, ticks=1518/70, in_queue=1588, util=98.70% 00:10:34.766 05:05:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:34.766 [global] 00:10:34.766 thread=1 00:10:34.766 invalidate=1 00:10:34.766 rw=randwrite 00:10:34.766 time_based=1 00:10:34.766 runtime=1 00:10:34.766 ioengine=libaio 00:10:34.766 direct=1 00:10:34.766 bs=4096 00:10:34.766 iodepth=1 00:10:34.766 norandommap=0 00:10:34.766 numjobs=1 00:10:34.766 00:10:34.766 verify_dump=1 00:10:34.766 verify_backlog=512 00:10:34.766 verify_state_save=0 00:10:34.766 do_verify=1 00:10:34.766 verify=crc32c-intel 00:10:34.766 [job0] 00:10:34.766 filename=/dev/nvme0n1 00:10:34.766 [job1] 00:10:34.766 filename=/dev/nvme0n2 00:10:34.766 [job2] 00:10:34.766 filename=/dev/nvme0n3 00:10:34.766 [job3] 00:10:34.766 filename=/dev/nvme0n4 00:10:34.766 Could not set queue depth (nvme0n1) 00:10:34.766 Could not set queue depth (nvme0n2) 00:10:34.766 Could not set queue depth (nvme0n3) 00:10:34.766 Could not set queue depth (nvme0n4) 00:10:35.025 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.025 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.025 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.025 job3: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.025 fio-3.35 00:10:35.025 Starting 4 threads 00:10:36.405 00:10:36.405 job0: (groupid=0, jobs=1): err= 0: pid=367374: Mon Dec 9 05:05:18 2024 00:10:36.405 read: IOPS=2348, BW=9395KiB/s (9620kB/s)(9404KiB/1001msec) 00:10:36.405 slat (nsec): min=8612, max=32580, avg=9330.42, stdev=1094.32 00:10:36.405 clat (usec): min=173, max=456, avg=226.78, stdev=18.06 00:10:36.405 lat (usec): min=183, max=465, avg=236.11, stdev=18.12 00:10:36.405 clat percentiles (usec): 00:10:36.405 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 212], 00:10:36.405 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:10:36.405 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 258], 00:10:36.405 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 297], 99.95th=[ 343], 00:10:36.405 | 99.99th=[ 457] 00:10:36.405 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:36.405 slat (usec): min=11, max=17210, avg=19.35, stdev=339.91 00:10:36.405 clat (usec): min=115, max=335, avg=149.94, stdev=12.76 00:10:36.405 lat (usec): min=127, max=17545, avg=169.29, stdev=343.79 00:10:36.405 clat percentiles (usec): 00:10:36.405 | 1.00th=[ 125], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:10:36.405 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 153], 00:10:36.405 | 70.00th=[ 157], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 169], 00:10:36.405 | 99.00th=[ 180], 99.50th=[ 182], 99.90th=[ 285], 99.95th=[ 297], 00:10:36.405 | 99.99th=[ 334] 00:10:36.405 bw ( KiB/s): min=11320, max=11320, per=40.51%, avg=11320.00, stdev= 0.00, samples=1 00:10:36.405 iops : min= 2830, max= 2830, avg=2830.00, stdev= 0.00, samples=1 00:10:36.405 lat (usec) : 250=95.11%, 500=4.89% 00:10:36.405 cpu : usr=1.90%, sys=6.90%, ctx=4914, majf=0, minf=1 00:10:36.405 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.405 issued rwts: total=2351,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.405 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.405 job1: (groupid=0, jobs=1): err= 0: pid=367387: Mon Dec 9 05:05:18 2024 00:10:36.405 read: IOPS=21, BW=85.8KiB/s (87.8kB/s)(88.0KiB/1026msec) 00:10:36.405 slat (nsec): min=11585, max=26296, avg=24925.41, stdev=2991.65 00:10:36.405 clat (usec): min=40669, max=41084, avg=40951.00, stdev=90.09 00:10:36.405 lat (usec): min=40681, max=41109, avg=40975.93, stdev=92.14 00:10:36.405 clat percentiles (usec): 00:10:36.405 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:36.405 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:36.405 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:36.405 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:36.405 | 99.99th=[41157] 00:10:36.405 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:10:36.405 slat (nsec): min=11640, max=46975, avg=12952.22, stdev=1940.78 00:10:36.405 clat (usec): min=150, max=322, avg=225.76, stdev=22.27 00:10:36.405 lat (usec): min=163, max=369, avg=238.71, stdev=22.47 00:10:36.405 clat percentiles (usec): 00:10:36.405 | 1.00th=[ 169], 5.00th=[ 182], 10.00th=[ 198], 20.00th=[ 210], 00:10:36.406 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:10:36.406 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 249], 95.00th=[ 258], 00:10:36.406 | 99.00th=[ 277], 99.50th=[ 302], 99.90th=[ 322], 99.95th=[ 322], 00:10:36.406 | 99.99th=[ 322] 00:10:36.406 bw ( KiB/s): min= 4096, max= 4096, per=14.66%, avg=4096.00, stdev= 0.00, samples=1 00:10:36.406 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:36.406 lat (usec) : 250=87.27%, 500=8.61% 00:10:36.406 lat (msec) : 50=4.12% 00:10:36.406 cpu : 
usr=0.49%, sys=0.49%, ctx=535, majf=0, minf=1 00:10:36.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.406 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.406 job2: (groupid=0, jobs=1): err= 0: pid=367409: Mon Dec 9 05:05:18 2024 00:10:36.406 read: IOPS=1817, BW=7269KiB/s (7444kB/s)(7400KiB/1018msec) 00:10:36.406 slat (nsec): min=4046, max=25750, avg=8873.26, stdev=1982.31 00:10:36.406 clat (usec): min=193, max=40984, avg=317.79, stdev=948.42 00:10:36.406 lat (usec): min=203, max=40989, avg=326.66, stdev=948.34 00:10:36.406 clat percentiles (usec): 00:10:36.406 | 1.00th=[ 204], 5.00th=[ 217], 10.00th=[ 229], 20.00th=[ 247], 00:10:36.406 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 285], 00:10:36.406 | 70.00th=[ 322], 80.00th=[ 343], 90.00th=[ 367], 95.00th=[ 469], 00:10:36.406 | 99.00th=[ 506], 99.50th=[ 519], 99.90th=[ 660], 99.95th=[41157], 00:10:36.406 | 99.99th=[41157] 00:10:36.406 write: IOPS=2011, BW=8047KiB/s (8240kB/s)(8192KiB/1018msec); 0 zone resets 00:10:36.406 slat (nsec): min=11688, max=44556, avg=12888.72, stdev=1598.18 00:10:36.406 clat (usec): min=123, max=380, avg=184.86, stdev=44.08 00:10:36.406 lat (usec): min=136, max=396, avg=197.75, stdev=44.14 00:10:36.406 clat percentiles (usec): 00:10:36.406 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 151], 00:10:36.406 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 176], 00:10:36.406 | 70.00th=[ 194], 80.00th=[ 227], 90.00th=[ 241], 95.00th=[ 265], 00:10:36.406 | 99.00th=[ 326], 99.50th=[ 330], 99.90th=[ 367], 99.95th=[ 371], 00:10:36.406 | 99.99th=[ 379] 00:10:36.406 bw ( KiB/s): min= 8192, max= 8192, per=29.31%, avg=8192.00, stdev= 0.00, samples=2 00:10:36.406 iops 
: min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:36.406 lat (usec) : 250=59.26%, 500=39.99%, 750=0.72% 00:10:36.406 lat (msec) : 50=0.03% 00:10:36.406 cpu : usr=2.65%, sys=4.23%, ctx=3899, majf=0, minf=1 00:10:36.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.406 issued rwts: total=1850,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.406 job3: (groupid=0, jobs=1): err= 0: pid=367421: Mon Dec 9 05:05:18 2024 00:10:36.406 read: IOPS=1499, BW=5998KiB/s (6142kB/s)(6148KiB/1025msec) 00:10:36.406 slat (nsec): min=9170, max=43467, avg=10090.73, stdev=1829.40 00:10:36.406 clat (usec): min=215, max=41346, avg=379.66, stdev=1049.16 00:10:36.406 lat (usec): min=225, max=41357, avg=389.75, stdev=1049.19 00:10:36.406 clat percentiles (usec): 00:10:36.406 | 1.00th=[ 231], 5.00th=[ 253], 10.00th=[ 265], 20.00th=[ 277], 00:10:36.406 | 30.00th=[ 289], 40.00th=[ 310], 50.00th=[ 326], 60.00th=[ 347], 00:10:36.406 | 70.00th=[ 388], 80.00th=[ 461], 90.00th=[ 498], 95.00th=[ 506], 00:10:36.406 | 99.00th=[ 519], 99.50th=[ 529], 99.90th=[ 545], 99.95th=[41157], 00:10:36.406 | 99.99th=[41157] 00:10:36.406 write: IOPS=1998, BW=7992KiB/s (8184kB/s)(8192KiB/1025msec); 0 zone resets 00:10:36.406 slat (nsec): min=5299, max=50242, avg=13850.05, stdev=3726.34 00:10:36.406 clat (usec): min=127, max=311, avg=189.12, stdev=34.12 00:10:36.406 lat (usec): min=135, max=324, avg=202.97, stdev=35.16 00:10:36.406 clat percentiles (usec): 00:10:36.406 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:10:36.406 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 188], 00:10:36.406 | 70.00th=[ 215], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 247], 00:10:36.406 | 99.00th=[ 260], 99.50th=[ 
269], 99.90th=[ 297], 99.95th=[ 310], 00:10:36.406 | 99.99th=[ 314] 00:10:36.406 bw ( KiB/s): min= 8192, max= 8192, per=29.31%, avg=8192.00, stdev= 0.00, samples=2 00:10:36.406 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:36.406 lat (usec) : 250=57.13%, 500=39.41%, 750=3.43% 00:10:36.406 lat (msec) : 50=0.03% 00:10:36.406 cpu : usr=2.54%, sys=6.93%, ctx=3585, majf=0, minf=2 00:10:36.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.406 issued rwts: total=1537,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.406 00:10:36.406 Run status group 0 (all jobs): 00:10:36.406 READ: bw=21.9MiB/s (23.0MB/s), 85.8KiB/s-9395KiB/s (87.8kB/s-9620kB/s), io=22.5MiB (23.6MB), run=1001-1026msec 00:10:36.406 WRITE: bw=27.3MiB/s (28.6MB/s), 1996KiB/s-9.99MiB/s (2044kB/s-10.5MB/s), io=28.0MiB (29.4MB), run=1001-1026msec 00:10:36.406 00:10:36.406 Disk stats (read/write): 00:10:36.406 nvme0n1: ios=1983/2048, merge=0/0, ticks=1417/295, in_queue=1712, util=99.60% 00:10:36.406 nvme0n2: ios=47/512, merge=0/0, ticks=1682/110, in_queue=1792, util=99.69% 00:10:36.406 nvme0n3: ios=1561/1755, merge=0/0, ticks=755/314, in_queue=1069, util=97.34% 00:10:36.406 nvme0n4: ios=1384/1536, merge=0/0, ticks=476/274, in_queue=750, util=89.31% 00:10:36.406 05:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:36.406 [global] 00:10:36.406 thread=1 00:10:36.406 invalidate=1 00:10:36.406 rw=write 00:10:36.406 time_based=1 00:10:36.406 runtime=1 00:10:36.406 ioengine=libaio 00:10:36.406 direct=1 00:10:36.406 bs=4096 00:10:36.406 iodepth=128 00:10:36.406 norandommap=0 
00:10:36.406 numjobs=1 00:10:36.406 00:10:36.406 verify_dump=1 00:10:36.406 verify_backlog=512 00:10:36.406 verify_state_save=0 00:10:36.406 do_verify=1 00:10:36.406 verify=crc32c-intel 00:10:36.406 [job0] 00:10:36.406 filename=/dev/nvme0n1 00:10:36.406 [job1] 00:10:36.406 filename=/dev/nvme0n2 00:10:36.406 [job2] 00:10:36.406 filename=/dev/nvme0n3 00:10:36.406 [job3] 00:10:36.406 filename=/dev/nvme0n4 00:10:36.406 Could not set queue depth (nvme0n1) 00:10:36.406 Could not set queue depth (nvme0n2) 00:10:36.406 Could not set queue depth (nvme0n3) 00:10:36.406 Could not set queue depth (nvme0n4) 00:10:36.665 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:36.665 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:36.665 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:36.665 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:36.665 fio-3.35 00:10:36.665 Starting 4 threads 00:10:38.047 00:10:38.047 job0: (groupid=0, jobs=1): err= 0: pid=367846: Mon Dec 9 05:05:20 2024 00:10:38.047 read: IOPS=2275, BW=9101KiB/s (9320kB/s)(9520KiB/1046msec) 00:10:38.047 slat (usec): min=3, max=21048, avg=178.08, stdev=1371.61 00:10:38.047 clat (usec): min=7873, max=57964, avg=24406.58, stdev=11943.88 00:10:38.047 lat (usec): min=7886, max=57973, avg=24584.66, stdev=12049.21 00:10:38.047 clat percentiles (usec): 00:10:38.047 | 1.00th=[ 7898], 5.00th=[ 9372], 10.00th=[11207], 20.00th=[13960], 00:10:38.047 | 30.00th=[15926], 40.00th=[16581], 50.00th=[19530], 60.00th=[26870], 00:10:38.047 | 70.00th=[32375], 80.00th=[35390], 90.00th=[39060], 95.00th=[46924], 00:10:38.047 | 99.00th=[53216], 99.50th=[56361], 99.90th=[57934], 99.95th=[57934], 00:10:38.047 | 99.99th=[57934] 00:10:38.047 write: IOPS=2447, BW=9790KiB/s (10.0MB/s)(10.0MiB/1046msec); 0 
zone resets 00:10:38.047 slat (usec): min=2, max=37316, avg=213.60, stdev=1486.60 00:10:38.047 clat (usec): min=1741, max=168294, avg=27292.48, stdev=32813.83 00:10:38.047 lat (usec): min=1784, max=168313, avg=27506.08, stdev=33034.80 00:10:38.047 clat percentiles (msec): 00:10:38.047 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:10:38.047 | 30.00th=[ 11], 40.00th=[ 16], 50.00th=[ 20], 60.00th=[ 22], 00:10:38.047 | 70.00th=[ 24], 80.00th=[ 27], 90.00th=[ 63], 95.00th=[ 116], 00:10:38.047 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 169], 99.95th=[ 169], 00:10:38.047 | 99.99th=[ 169] 00:10:38.047 bw ( KiB/s): min= 8192, max=12288, per=16.43%, avg=10240.00, stdev=2896.31, samples=2 00:10:38.047 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:10:38.047 lat (msec) : 2=0.22%, 4=0.40%, 10=17.73%, 20=36.03%, 50=38.26% 00:10:38.047 lat (msec) : 100=4.01%, 250=3.34% 00:10:38.047 cpu : usr=2.39%, sys=5.07%, ctx=158, majf=0, minf=1 00:10:38.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:10:38.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:38.047 issued rwts: total=2380,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.047 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:38.047 job1: (groupid=0, jobs=1): err= 0: pid=367861: Mon Dec 9 05:05:20 2024 00:10:38.047 read: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec) 00:10:38.047 slat (nsec): min=1757, max=14450k, avg=76129.15, stdev=569805.09 00:10:38.047 clat (usec): min=1698, max=28721, avg=10341.27, stdev=4156.41 00:10:38.047 lat (usec): min=1705, max=30232, avg=10417.40, stdev=4195.09 00:10:38.047 clat percentiles (usec): 00:10:38.047 | 1.00th=[ 3458], 5.00th=[ 6063], 10.00th=[ 6587], 20.00th=[ 7504], 00:10:38.047 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 9241], 60.00th=[10028], 00:10:38.047 | 70.00th=[11076], 
80.00th=[12518], 90.00th=[16057], 95.00th=[19006], 00:10:38.047 | 99.00th=[25035], 99.50th=[25560], 99.90th=[25560], 99.95th=[26346], 00:10:38.047 | 99.99th=[28705] 00:10:38.047 write: IOPS=6346, BW=24.8MiB/s (26.0MB/s)(25.0MiB/1007msec); 0 zone resets 00:10:38.047 slat (usec): min=2, max=47487, avg=70.42, stdev=753.79 00:10:38.047 clat (usec): min=1527, max=49000, avg=9115.53, stdev=4407.09 00:10:38.047 lat (usec): min=1540, max=49050, avg=9185.95, stdev=4462.35 00:10:38.047 clat percentiles (usec): 00:10:38.047 | 1.00th=[ 2769], 5.00th=[ 4424], 10.00th=[ 5735], 20.00th=[ 6718], 00:10:38.047 | 30.00th=[ 7242], 40.00th=[ 7832], 50.00th=[ 8094], 60.00th=[ 8717], 00:10:38.047 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[12256], 95.00th=[17433], 00:10:38.047 | 99.00th=[31589], 99.50th=[32113], 99.90th=[32113], 99.95th=[32113], 00:10:38.047 | 99.99th=[49021] 00:10:38.047 bw ( KiB/s): min=21448, max=28664, per=40.21%, avg=25056.00, stdev=5102.48, samples=2 00:10:38.047 iops : min= 5362, max= 7166, avg=6264.00, stdev=1275.62, samples=2 00:10:38.047 lat (msec) : 2=0.52%, 4=1.82%, 10=67.32%, 20=26.66%, 50=3.68% 00:10:38.047 cpu : usr=6.56%, sys=7.85%, ctx=457, majf=0, minf=1 00:10:38.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:38.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:38.047 issued rwts: total=6144,6391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.047 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:38.047 job2: (groupid=0, jobs=1): err= 0: pid=367882: Mon Dec 9 05:05:20 2024 00:10:38.047 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:10:38.047 slat (usec): min=2, max=18371, avg=114.80, stdev=882.87 00:10:38.047 clat (usec): min=2769, max=46343, avg=15850.38, stdev=8258.03 00:10:38.047 lat (usec): min=2778, max=46369, avg=15965.18, stdev=8337.22 00:10:38.047 clat percentiles 
(usec): 00:10:38.047 | 1.00th=[ 3654], 5.00th=[ 6325], 10.00th=[ 8029], 20.00th=[10028], 00:10:38.047 | 30.00th=[10814], 40.00th=[11338], 50.00th=[12125], 60.00th=[13829], 00:10:38.047 | 70.00th=[20055], 80.00th=[23725], 90.00th=[27132], 95.00th=[31589], 00:10:38.047 | 99.00th=[43254], 99.50th=[43779], 99.90th=[45876], 99.95th=[45876], 00:10:38.047 | 99.99th=[46400] 00:10:38.047 write: IOPS=4382, BW=17.1MiB/s (18.0MB/s)(17.3MiB/1008msec); 0 zone resets 00:10:38.047 slat (usec): min=2, max=18032, avg=80.25, stdev=663.53 00:10:38.047 clat (usec): min=223, max=119182, avg=14256.55, stdev=16665.62 00:10:38.047 lat (usec): min=256, max=119194, avg=14336.80, stdev=16709.93 00:10:38.047 clat percentiles (usec): 00:10:38.047 | 1.00th=[ 758], 5.00th=[ 1680], 10.00th=[ 2933], 20.00th=[ 5276], 00:10:38.047 | 30.00th=[ 8455], 40.00th=[ 9503], 50.00th=[ 10421], 60.00th=[ 11207], 00:10:38.047 | 70.00th=[ 13829], 80.00th=[ 18220], 90.00th=[ 23725], 95.00th=[ 38536], 00:10:38.047 | 99.00th=[109577], 99.50th=[113771], 99.90th=[119014], 99.95th=[119014], 00:10:38.047 | 99.99th=[119014] 00:10:38.047 bw ( KiB/s): min=15120, max=19208, per=27.54%, avg=17164.00, stdev=2890.65, samples=2 00:10:38.047 iops : min= 3780, max= 4802, avg=4291.00, stdev=722.66, samples=2 00:10:38.047 lat (usec) : 250=0.04%, 500=0.08%, 750=0.38%, 1000=1.50% 00:10:38.047 lat (msec) : 2=1.26%, 4=4.44%, 10=24.98%, 20=46.18%, 50=19.30% 00:10:38.047 lat (msec) : 100=1.12%, 250=0.73% 00:10:38.047 cpu : usr=4.67%, sys=7.25%, ctx=302, majf=0, minf=1 00:10:38.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:38.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:38.047 issued rwts: total=4096,4418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.047 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:38.047 job3: (groupid=0, jobs=1): err= 0: pid=367890: Mon Dec 9 
05:05:20 2024 00:10:38.047 read: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec) 00:10:38.047 slat (usec): min=2, max=23456, avg=139.69, stdev=1055.31 00:10:38.047 clat (usec): min=6748, max=48955, avg=18087.96, stdev=6653.63 00:10:38.047 lat (usec): min=6759, max=48984, avg=18227.65, stdev=6746.00 00:10:38.047 clat percentiles (usec): 00:10:38.047 | 1.00th=[ 6783], 5.00th=[ 8160], 10.00th=[13304], 20.00th=[14222], 00:10:38.047 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15401], 60.00th=[17695], 00:10:38.047 | 70.00th=[19006], 80.00th=[22414], 90.00th=[26608], 95.00th=[33424], 00:10:38.047 | 99.00th=[40109], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:38.047 | 99.99th=[49021] 00:10:38.047 write: IOPS=2891, BW=11.3MiB/s (11.8MB/s)(11.4MiB/1012msec); 0 zone resets 00:10:38.047 slat (usec): min=3, max=10381, avg=209.70, stdev=1014.60 00:10:38.047 clat (usec): min=1792, max=99781, avg=28048.51, stdev=22875.65 00:10:38.047 lat (usec): min=1806, max=99800, avg=28258.22, stdev=23015.97 00:10:38.047 clat percentiles (msec): 00:10:38.047 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 10], 00:10:38.047 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 20], 60.00th=[ 27], 00:10:38.047 | 70.00th=[ 36], 80.00th=[ 46], 90.00th=[ 65], 95.00th=[ 81], 00:10:38.047 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 101], 99.95th=[ 101], 00:10:38.047 | 99.99th=[ 101] 00:10:38.047 bw ( KiB/s): min= 9344, max=13040, per=17.96%, avg=11192.00, stdev=2613.47, samples=2 00:10:38.047 iops : min= 2336, max= 3260, avg=2798.00, stdev=653.37, samples=2 00:10:38.047 lat (msec) : 2=0.36%, 4=0.64%, 10=13.12%, 20=47.08%, 50=30.26% 00:10:38.047 lat (msec) : 100=8.53% 00:10:38.048 cpu : usr=4.06%, sys=4.75%, ctx=254, majf=0, minf=2 00:10:38.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:38.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:10:38.048 issued rwts: total=2560,2926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.048 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:38.048 00:10:38.048 Run status group 0 (all jobs): 00:10:38.048 READ: bw=56.7MiB/s (59.4MB/s), 9101KiB/s-23.8MiB/s (9320kB/s-25.0MB/s), io=59.3MiB (62.2MB), run=1007-1046msec 00:10:38.048 WRITE: bw=60.9MiB/s (63.8MB/s), 9790KiB/s-24.8MiB/s (10.0MB/s-26.0MB/s), io=63.7MiB (66.7MB), run=1007-1046msec 00:10:38.048 00:10:38.048 Disk stats (read/write): 00:10:38.048 nvme0n1: ios=1874/2048, merge=0/0, ticks=21677/31895, in_queue=53572, util=95.09% 00:10:38.048 nvme0n2: ios=5172/5186, merge=0/0, ticks=47206/40903, in_queue=88109, util=97.94% 00:10:38.048 nvme0n3: ios=3355/3584, merge=0/0, ticks=35632/39382, in_queue=75014, util=99.68% 00:10:38.048 nvme0n4: ios=2144/2560, merge=0/0, ticks=38565/61806, in_queue=100371, util=97.40% 00:10:38.048 05:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:38.048 [global] 00:10:38.048 thread=1 00:10:38.048 invalidate=1 00:10:38.048 rw=randwrite 00:10:38.048 time_based=1 00:10:38.048 runtime=1 00:10:38.048 ioengine=libaio 00:10:38.048 direct=1 00:10:38.048 bs=4096 00:10:38.048 iodepth=128 00:10:38.048 norandommap=0 00:10:38.048 numjobs=1 00:10:38.048 00:10:38.048 verify_dump=1 00:10:38.048 verify_backlog=512 00:10:38.048 verify_state_save=0 00:10:38.048 do_verify=1 00:10:38.048 verify=crc32c-intel 00:10:38.048 [job0] 00:10:38.048 filename=/dev/nvme0n1 00:10:38.048 [job1] 00:10:38.048 filename=/dev/nvme0n2 00:10:38.048 [job2] 00:10:38.048 filename=/dev/nvme0n3 00:10:38.048 [job3] 00:10:38.048 filename=/dev/nvme0n4 00:10:38.048 Could not set queue depth (nvme0n1) 00:10:38.048 Could not set queue depth (nvme0n2) 00:10:38.048 Could not set queue depth (nvme0n3) 00:10:38.048 Could not set queue depth (nvme0n4) 00:10:38.612 job0: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:38.612 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:38.613 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:38.613 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:38.613 fio-3.35 00:10:38.613 Starting 4 threads 00:10:39.988 00:10:39.988 job0: (groupid=0, jobs=1): err= 0: pid=368313: Mon Dec 9 05:05:22 2024 00:10:39.988 read: IOPS=4016, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1004msec) 00:10:39.988 slat (nsec): min=1765, max=10686k, avg=108727.94, stdev=731731.94 00:10:39.988 clat (usec): min=1329, max=56372, avg=14541.72, stdev=7114.55 00:10:39.988 lat (usec): min=2921, max=56380, avg=14650.45, stdev=7148.14 00:10:39.988 clat percentiles (usec): 00:10:39.988 | 1.00th=[ 4424], 5.00th=[ 7570], 10.00th=[ 8848], 20.00th=[10159], 00:10:39.988 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12649], 60.00th=[13173], 00:10:39.988 | 70.00th=[14353], 80.00th=[17695], 90.00th=[22414], 95.00th=[27395], 00:10:39.988 | 99.00th=[42730], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:10:39.988 | 99.99th=[56361] 00:10:39.988 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:10:39.988 slat (usec): min=2, max=15376, avg=120.63, stdev=753.80 00:10:39.988 clat (usec): min=923, max=68294, avg=16754.76, stdev=11720.55 00:10:39.988 lat (usec): min=932, max=68304, avg=16875.38, stdev=11778.93 00:10:39.988 clat percentiles (usec): 00:10:39.988 | 1.00th=[ 5145], 5.00th=[ 6587], 10.00th=[ 8160], 20.00th=[10290], 00:10:39.988 | 30.00th=[11338], 40.00th=[12256], 50.00th=[12649], 60.00th=[13960], 00:10:39.988 | 70.00th=[16581], 80.00th=[20841], 90.00th=[25822], 95.00th=[47973], 00:10:39.988 | 99.00th=[62129], 99.50th=[65799], 99.90th=[66847], 99.95th=[68682], 
00:10:39.988 | 99.99th=[68682] 00:10:39.988 bw ( KiB/s): min=16304, max=16464, per=22.30%, avg=16384.00, stdev=113.14, samples=2 00:10:39.988 iops : min= 4076, max= 4116, avg=4096.00, stdev=28.28, samples=2 00:10:39.988 lat (usec) : 1000=0.05% 00:10:39.988 lat (msec) : 2=0.01%, 4=0.17%, 10=16.83%, 20=63.66%, 50=16.56% 00:10:39.988 lat (msec) : 100=2.72% 00:10:39.988 cpu : usr=3.29%, sys=5.48%, ctx=425, majf=0, minf=1 00:10:39.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:39.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:39.988 issued rwts: total=4033,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:39.988 job1: (groupid=0, jobs=1): err= 0: pid=368335: Mon Dec 9 05:05:22 2024 00:10:39.988 read: IOPS=5951, BW=23.2MiB/s (24.4MB/s)(23.4MiB/1008msec) 00:10:39.988 slat (usec): min=2, max=9057, avg=82.75, stdev=493.56 00:10:39.988 clat (usec): min=2865, max=29901, avg=10782.23, stdev=2555.61 00:10:39.988 lat (usec): min=5240, max=29910, avg=10864.98, stdev=2592.32 00:10:39.988 clat percentiles (usec): 00:10:39.988 | 1.00th=[ 6849], 5.00th=[ 7767], 10.00th=[ 8356], 20.00th=[ 9372], 00:10:39.988 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10552], 00:10:39.988 | 70.00th=[11338], 80.00th=[11731], 90.00th=[13435], 95.00th=[14877], 00:10:39.988 | 99.00th=[21103], 99.50th=[21627], 99.90th=[25297], 99.95th=[25822], 00:10:39.988 | 99.99th=[30016] 00:10:39.988 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:10:39.988 slat (usec): min=3, max=8225, avg=72.01, stdev=395.68 00:10:39.988 clat (usec): min=4952, max=30508, avg=10214.57, stdev=1962.51 00:10:39.988 lat (usec): min=4966, max=30531, avg=10286.59, stdev=1986.76 00:10:39.988 clat percentiles (usec): 00:10:39.988 | 1.00th=[ 6325], 5.00th=[ 8455], 10.00th=[ 
8979], 20.00th=[ 9372], 00:10:39.988 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:10:39.988 | 70.00th=[10159], 80.00th=[10421], 90.00th=[12125], 95.00th=[13566], 00:10:39.988 | 99.00th=[21103], 99.50th=[21627], 99.90th=[25035], 99.95th=[25560], 00:10:39.988 | 99.99th=[30540] 00:10:39.988 bw ( KiB/s): min=24576, max=24576, per=33.46%, avg=24576.00, stdev= 0.00, samples=2 00:10:39.988 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:10:39.988 lat (msec) : 4=0.01%, 10=54.62%, 20=43.23%, 50=2.13% 00:10:39.988 cpu : usr=7.94%, sys=10.03%, ctx=533, majf=0, minf=1 00:10:39.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:39.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:39.988 issued rwts: total=5999,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:39.988 job2: (groupid=0, jobs=1): err= 0: pid=368350: Mon Dec 9 05:05:22 2024 00:10:39.988 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:10:39.988 slat (usec): min=2, max=21732, avg=154.96, stdev=1087.59 00:10:39.988 clat (usec): min=3253, max=61298, avg=17758.56, stdev=8548.95 00:10:39.988 lat (usec): min=3283, max=61301, avg=17913.52, stdev=8633.89 00:10:39.988 clat percentiles (usec): 00:10:39.988 | 1.00th=[ 6128], 5.00th=[10290], 10.00th=[11338], 20.00th=[12256], 00:10:39.988 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13566], 60.00th=[16319], 00:10:39.988 | 70.00th=[19792], 80.00th=[23987], 90.00th=[28443], 95.00th=[35914], 00:10:39.988 | 99.00th=[50070], 99.50th=[55837], 99.90th=[61080], 99.95th=[61080], 00:10:39.988 | 99.99th=[61080] 00:10:39.988 write: IOPS=3186, BW=12.4MiB/s (13.1MB/s)(12.6MiB/1012msec); 0 zone resets 00:10:39.988 slat (usec): min=3, max=9378, avg=152.61, stdev=676.07 00:10:39.988 clat (usec): min=1856, max=61296, 
avg=22894.91, stdev=12311.43 00:10:39.988 lat (usec): min=1876, max=61300, avg=23047.52, stdev=12384.32 00:10:39.988 clat percentiles (usec): 00:10:39.988 | 1.00th=[ 2671], 5.00th=[ 6783], 10.00th=[10159], 20.00th=[12125], 00:10:39.988 | 30.00th=[13304], 40.00th=[18482], 50.00th=[20841], 60.00th=[22152], 00:10:39.988 | 70.00th=[27395], 80.00th=[34866], 90.00th=[42206], 95.00th=[47449], 00:10:39.988 | 99.00th=[50070], 99.50th=[50594], 99.90th=[50594], 99.95th=[61080], 00:10:39.988 | 99.99th=[61080] 00:10:39.988 bw ( KiB/s): min=12288, max=12496, per=16.87%, avg=12392.00, stdev=147.08, samples=2 00:10:39.988 iops : min= 3072, max= 3124, avg=3098.00, stdev=36.77, samples=2 00:10:39.988 lat (msec) : 2=0.30%, 4=0.64%, 10=5.94%, 20=50.63%, 50=40.86% 00:10:39.988 lat (msec) : 100=1.64% 00:10:39.988 cpu : usr=2.97%, sys=5.24%, ctx=380, majf=0, minf=2 00:10:39.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:39.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:39.988 issued rwts: total=3072,3225,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:39.988 job3: (groupid=0, jobs=1): err= 0: pid=368357: Mon Dec 9 05:05:22 2024 00:10:39.988 read: IOPS=4791, BW=18.7MiB/s (19.6MB/s)(18.9MiB/1008msec) 00:10:39.988 slat (nsec): min=1762, max=26939k, avg=100449.32, stdev=779093.43 00:10:39.988 clat (usec): min=2901, max=42533, avg=13224.46, stdev=4817.61 00:10:39.988 lat (usec): min=4171, max=42543, avg=13324.91, stdev=4846.98 00:10:39.988 clat percentiles (usec): 00:10:39.988 | 1.00th=[ 6783], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10945], 00:10:39.988 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12518], 60.00th=[12911], 00:10:39.988 | 70.00th=[13304], 80.00th=[14091], 90.00th=[16909], 95.00th=[19792], 00:10:39.988 | 99.00th=[36963], 99.50th=[37487], 99.90th=[42730], 
99.95th=[42730], 00:10:39.988 | 99.99th=[42730] 00:10:39.988 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec); 0 zone resets 00:10:39.988 slat (usec): min=2, max=10876, avg=91.02, stdev=579.56 00:10:39.988 clat (usec): min=3231, max=30764, avg=12382.03, stdev=3536.34 00:10:39.988 lat (usec): min=3245, max=30775, avg=12473.05, stdev=3563.42 00:10:39.988 clat percentiles (usec): 00:10:39.988 | 1.00th=[ 4948], 5.00th=[ 6980], 10.00th=[ 8160], 20.00th=[10945], 00:10:39.988 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12387], 00:10:39.988 | 70.00th=[12649], 80.00th=[13435], 90.00th=[17695], 95.00th=[19006], 00:10:39.988 | 99.00th=[23462], 99.50th=[24249], 99.90th=[30802], 99.95th=[30802], 00:10:39.988 | 99.99th=[30802] 00:10:39.988 bw ( KiB/s): min=20480, max=20480, per=27.88%, avg=20480.00, stdev= 0.00, samples=2 00:10:39.988 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:39.988 lat (msec) : 4=0.23%, 10=15.65%, 20=79.62%, 50=4.50% 00:10:39.988 cpu : usr=5.66%, sys=8.04%, ctx=433, majf=0, minf=1 00:10:39.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:39.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:39.988 issued rwts: total=4830,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:39.988 00:10:39.988 Run status group 0 (all jobs): 00:10:39.988 READ: bw=69.2MiB/s (72.6MB/s), 11.9MiB/s-23.2MiB/s (12.4MB/s-24.4MB/s), io=70.1MiB (73.5MB), run=1004-1012msec 00:10:39.988 WRITE: bw=71.7MiB/s (75.2MB/s), 12.4MiB/s-23.8MiB/s (13.1MB/s-25.0MB/s), io=72.6MiB (76.1MB), run=1004-1012msec 00:10:39.989 00:10:39.989 Disk stats (read/write): 00:10:39.989 nvme0n1: ios=3022/3072, merge=0/0, ticks=30335/32045, in_queue=62380, util=98.60% 00:10:39.989 nvme0n2: ios=5018/5120, merge=0/0, ticks=28230/27681, in_queue=55911, 
util=99.38% 00:10:39.989 nvme0n3: ios=2112/2560, merge=0/0, ticks=38180/61933, in_queue=100113, util=87.93% 00:10:39.989 nvme0n4: ios=4093/4096, merge=0/0, ticks=41598/34220, in_queue=75818, util=98.36% 00:10:39.989 05:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:39.989 05:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=368482 00:10:39.989 05:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:39.989 05:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:39.989 [global] 00:10:39.989 thread=1 00:10:39.989 invalidate=1 00:10:39.989 rw=read 00:10:39.989 time_based=1 00:10:39.989 runtime=10 00:10:39.989 ioengine=libaio 00:10:39.989 direct=1 00:10:39.989 bs=4096 00:10:39.989 iodepth=1 00:10:39.989 norandommap=1 00:10:39.989 numjobs=1 00:10:39.989 00:10:39.989 [job0] 00:10:39.989 filename=/dev/nvme0n1 00:10:39.989 [job1] 00:10:39.989 filename=/dev/nvme0n2 00:10:39.989 [job2] 00:10:39.989 filename=/dev/nvme0n3 00:10:39.989 [job3] 00:10:39.989 filename=/dev/nvme0n4 00:10:39.989 Could not set queue depth (nvme0n1) 00:10:39.989 Could not set queue depth (nvme0n2) 00:10:39.989 Could not set queue depth (nvme0n3) 00:10:39.989 Could not set queue depth (nvme0n4) 00:10:39.989 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.989 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.989 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.989 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.989 fio-3.35 00:10:39.989 Starting 4 threads 00:10:43.283 05:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:43.283 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=33521664, buflen=4096 00:10:43.283 fio: pid=368792, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:43.283 05:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:43.283 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=45391872, buflen=4096 00:10:43.283 fio: pid=368784, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:43.283 05:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:43.283 05:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:43.283 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=52092928, buflen=4096 00:10:43.283 fio: pid=368759, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:43.283 05:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:43.283 05:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:43.543 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=38133760, buflen=4096 00:10:43.543 fio: pid=368771, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:43.543 05:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:43.543 
05:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:43.543 00:10:43.543 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=368759: Mon Dec 9 05:05:25 2024 00:10:43.543 read: IOPS=4168, BW=16.3MiB/s (17.1MB/s)(49.7MiB/3051msec) 00:10:43.543 slat (usec): min=6, max=13253, avg=12.42, stdev=157.16 00:10:43.543 clat (usec): min=170, max=562, avg=223.91, stdev=18.02 00:10:43.543 lat (usec): min=179, max=13502, avg=236.33, stdev=158.69 00:10:43.543 clat percentiles (usec): 00:10:43.543 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 210], 00:10:43.543 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:10:43.543 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 255], 00:10:43.543 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 318], 99.95th=[ 326], 00:10:43.543 | 99.99th=[ 474] 00:10:43.543 bw ( KiB/s): min=16672, max=17624, per=33.69%, avg=17012.80, stdev=383.77, samples=5 00:10:43.543 iops : min= 4168, max= 4406, avg=4253.20, stdev=95.94, samples=5 00:10:43.543 lat (usec) : 250=92.79%, 500=7.19%, 750=0.01% 00:10:43.543 cpu : usr=3.38%, sys=7.05%, ctx=12724, majf=0, minf=1 00:10:43.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:43.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.543 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.543 issued rwts: total=12719,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.543 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=368771: Mon Dec 9 05:05:25 2024 00:10:43.543 read: IOPS=2846, BW=11.1MiB/s (11.7MB/s)(36.4MiB/3271msec) 00:10:43.543 slat (usec): min=8, max=29682, avg=20.01, stdev=414.84 
00:10:43.543 clat (usec): min=166, max=41052, avg=326.52, stdev=1885.69 00:10:43.543 lat (usec): min=175, max=41065, avg=346.53, stdev=1930.96 00:10:43.543 clat percentiles (usec): 00:10:43.543 | 1.00th=[ 186], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 215], 00:10:43.543 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 237], 00:10:43.543 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 281], 95.00th=[ 306], 00:10:43.543 | 99.00th=[ 441], 99.50th=[ 474], 99.90th=[41157], 99.95th=[41157], 00:10:43.543 | 99.99th=[41157] 00:10:43.543 bw ( KiB/s): min= 104, max=16408, per=22.29%, avg=11257.83, stdev=6699.08, samples=6 00:10:43.543 iops : min= 26, max= 4102, avg=2814.33, stdev=1674.70, samples=6 00:10:43.543 lat (usec) : 250=77.52%, 500=22.11%, 750=0.09% 00:10:43.543 lat (msec) : 2=0.03%, 4=0.02%, 50=0.21% 00:10:43.543 cpu : usr=1.96%, sys=5.17%, ctx=9317, majf=0, minf=1 00:10:43.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:43.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.543 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.543 issued rwts: total=9311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.543 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=368784: Mon Dec 9 05:05:25 2024 00:10:43.543 read: IOPS=3894, BW=15.2MiB/s (15.9MB/s)(43.3MiB/2846msec) 00:10:43.543 slat (usec): min=8, max=15065, avg=13.58, stdev=181.59 00:10:43.543 clat (usec): min=182, max=719, avg=239.41, stdev=24.34 00:10:43.543 lat (usec): min=191, max=15273, avg=252.99, stdev=183.13 00:10:43.543 clat percentiles (usec): 00:10:43.543 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 221], 00:10:43.543 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 243], 00:10:43.543 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 281], 
00:10:43.543 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 392], 99.95th=[ 424], 00:10:43.543 | 99.99th=[ 545] 00:10:43.543 bw ( KiB/s): min=15088, max=17080, per=31.39%, avg=15851.20, stdev=832.84, samples=5 00:10:43.543 iops : min= 3772, max= 4270, avg=3962.80, stdev=208.21, samples=5 00:10:43.543 lat (usec) : 250=71.97%, 500=28.01%, 750=0.02% 00:10:43.543 cpu : usr=2.57%, sys=6.22%, ctx=11088, majf=0, minf=2 00:10:43.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:43.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.543 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.543 issued rwts: total=11083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.543 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=368792: Mon Dec 9 05:05:25 2024 00:10:43.543 read: IOPS=3124, BW=12.2MiB/s (12.8MB/s)(32.0MiB/2620msec) 00:10:43.543 slat (nsec): min=4274, max=37215, avg=8971.60, stdev=1125.30 00:10:43.543 clat (usec): min=187, max=41011, avg=308.80, stdev=1493.56 00:10:43.543 lat (usec): min=196, max=41045, avg=317.78, stdev=1494.01 00:10:43.543 clat percentiles (usec): 00:10:43.543 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:10:43.543 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:10:43.543 | 70.00th=[ 255], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 285], 00:10:43.543 | 99.00th=[ 420], 99.50th=[ 437], 99.90th=[41157], 99.95th=[41157], 00:10:43.543 | 99.99th=[41157] 00:10:43.543 bw ( KiB/s): min= 1400, max=15512, per=24.82%, avg=12531.20, stdev=6231.23, samples=5 00:10:43.543 iops : min= 350, max= 3878, avg=3132.80, stdev=1557.81, samples=5 00:10:43.543 lat (usec) : 250=54.31%, 500=45.49%, 750=0.04% 00:10:43.543 lat (msec) : 2=0.01%, 10=0.01%, 50=0.13% 00:10:43.543 cpu : usr=1.30%, sys=3.36%, ctx=8186, majf=0, minf=2 
00:10:43.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:43.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.543 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.543 issued rwts: total=8185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.543 00:10:43.543 Run status group 0 (all jobs): 00:10:43.543 READ: bw=49.3MiB/s (51.7MB/s), 11.1MiB/s-16.3MiB/s (11.7MB/s-17.1MB/s), io=161MiB (169MB), run=2620-3271msec 00:10:43.543 00:10:43.543 Disk stats (read/write): 00:10:43.543 nvme0n1: ios=11881/0, merge=0/0, ticks=3247/0, in_queue=3247, util=99.23% 00:10:43.543 nvme0n2: ios=8701/0, merge=0/0, ticks=2791/0, in_queue=2791, util=93.58% 00:10:43.543 nvme0n3: ios=11115/0, merge=0/0, ticks=3176/0, in_queue=3176, util=99.18% 00:10:43.543 nvme0n4: ios=8069/0, merge=0/0, ticks=2445/0, in_queue=2445, util=96.37% 00:10:43.802 05:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:43.802 05:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:44.060 05:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:44.061 05:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:44.061 05:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:44.061 05:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc5 00:10:44.319 05:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:44.319 05:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:44.578 05:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:44.578 05:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 368482 00:10:44.578 05:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:44.579 05:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:44.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.579 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:44.579 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:44.579 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:44.579 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:44.837 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:44.837 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:44.837 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:44.837 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:44.837 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 
00:10:44.837 nvmf hotplug test: fio failed as expected 00:10:44.837 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:44.837 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:44.837 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:44.837 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:44.837 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:44.837 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:44.837 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:44.837 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:45.097 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:45.097 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:45.097 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:45.097 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:45.097 rmmod nvme_tcp 00:10:45.097 rmmod nvme_fabrics 00:10:45.097 rmmod nvme_keyring 00:10:45.097 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:45.097 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:45.097 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:45.097 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 365381 ']' 
00:10:45.097 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 365381 00:10:45.097 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 365381 ']' 00:10:45.097 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 365381 00:10:45.097 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:45.097 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.097 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 365381 00:10:45.097 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:45.097 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:45.097 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 365381' 00:10:45.097 killing process with pid 365381 00:10:45.097 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 365381 00:10:45.097 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 365381 00:10:45.356 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:45.356 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:45.356 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:45.356 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:45.356 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:45.356 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # 
grep -v SPDK_NVMF 00:10:45.356 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:45.356 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:45.356 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:45.356 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.356 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.356 05:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.275 05:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:47.275 00:10:47.275 real 0m29.438s 00:10:47.275 user 2m4.243s 00:10:47.275 sys 0m11.266s 00:10:47.275 05:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.275 05:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.275 ************************************ 00:10:47.275 END TEST nvmf_fio_target 00:10:47.275 ************************************ 00:10:47.534 05:05:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:47.534 05:05:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:47.534 05:05:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.534 05:05:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:47.534 ************************************ 00:10:47.534 START TEST nvmf_bdevio 00:10:47.534 ************************************ 00:10:47.534 05:05:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:47.534 * Looking for test storage... 00:10:47.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.534 05:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:47.534 05:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:47.534 05:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:47.534 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:47.534 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@344 -- # case "$op" in 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.794 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 
'LCOV_OPTS= 00:10:47.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.795 --rc genhtml_branch_coverage=1 00:10:47.795 --rc genhtml_function_coverage=1 00:10:47.795 --rc genhtml_legend=1 00:10:47.795 --rc geninfo_all_blocks=1 00:10:47.795 --rc geninfo_unexecuted_blocks=1 00:10:47.795 00:10:47.795 ' 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:47.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.795 --rc genhtml_branch_coverage=1 00:10:47.795 --rc genhtml_function_coverage=1 00:10:47.795 --rc genhtml_legend=1 00:10:47.795 --rc geninfo_all_blocks=1 00:10:47.795 --rc geninfo_unexecuted_blocks=1 00:10:47.795 00:10:47.795 ' 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:47.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.795 --rc genhtml_branch_coverage=1 00:10:47.795 --rc genhtml_function_coverage=1 00:10:47.795 --rc genhtml_legend=1 00:10:47.795 --rc geninfo_all_blocks=1 00:10:47.795 --rc geninfo_unexecuted_blocks=1 00:10:47.795 00:10:47.795 ' 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:47.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.795 --rc genhtml_branch_coverage=1 00:10:47.795 --rc genhtml_function_coverage=1 00:10:47.795 --rc genhtml_legend=1 00:10:47.795 --rc geninfo_all_blocks=1 00:10:47.795 --rc geninfo_unexecuted_blocks=1 00:10:47.795 00:10:47.795 ' 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.795 05:05:30 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # 
shopt -s extglob 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:47.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:47.795 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:47.796 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.796 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.796 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.796 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:47.796 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:47.796 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:47.796 05:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.930 05:05:37 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:55.930 05:05:37 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:55.930 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:55.930 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:55.930 
05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:55.930 Found net devices under 0000:af:00.0: cvl_0_0 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:55.930 Found net devices under 0000:af:00.1: cvl_0_1 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:55.930 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:55.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:55.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:10:55.931 00:10:55.931 --- 10.0.0.2 ping statistics --- 00:10:55.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.931 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:55.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:55.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:10:55.931 00:10:55.931 --- 10.0.0.1 ping statistics --- 00:10:55.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.931 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:55.931 05:05:37 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=373424 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 373424 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 373424 ']' 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.931 05:05:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.931 [2024-12-09 05:05:37.450449] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:10:55.931 [2024-12-09 05:05:37.450499] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.931 [2024-12-09 05:05:37.544879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.931 [2024-12-09 05:05:37.586300] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.931 [2024-12-09 05:05:37.586335] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.931 [2024-12-09 05:05:37.586344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.931 [2024-12-09 05:05:37.586353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.931 [2024-12-09 05:05:37.586360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
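The `nvmf_tcp_init` trace earlier in this log shows the harness splitting the two NIC ports (`cvl_0_0`/`cvl_0_1`) across a network namespace so target and initiator traffic actually traverse the wire. A minimal sketch of that pattern, reconstructed from the traced commands; it is wrapped in a function and never invoked here, since running it for real requires root and the physical interfaces from this test bed:

```shell
#!/usr/bin/env bash
# Sketch of the netns split performed by nvmf_tcp_init in nvmf/common.sh.
# Interface names, IPs, namespace name and port mirror the trace above.
setup_tcp_netns() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

    ip -4 addr flush "$target_if"        # drop any stale addresses
    ip -4 addr flush "$initiator_if"

    ip netns add "$ns"                   # isolate the target-side port
    ip link set "$target_if" netns "$ns"

    ip addr add 10.0.0.1/24 dev "$initiator_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up

    # Open NVMe/TCP port 4420, then verify connectivity both ways,
    # as the harness does with the two pings in the trace.
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

With this split in place, the target is launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...` in the trace) while the initiator-side tools run in the root namespace.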
00:10:55.931 [2024-12-09 05:05:37.588169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:55.931 [2024-12-09 05:05:37.588279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:55.931 [2024-12-09 05:05:37.588374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.931 [2024-12-09 05:05:37.588374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.931 [2024-12-09 05:05:38.323778] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.931 05:05:38 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.931 Malloc0 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.931 [2024-12-09 05:05:38.393527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.931 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.191 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:56.191 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:56.191 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:56.191 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:56.191 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:56.191 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:56.191 { 00:10:56.191 "params": { 00:10:56.191 "name": "Nvme$subsystem", 00:10:56.191 "trtype": "$TEST_TRANSPORT", 00:10:56.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:56.191 "adrfam": "ipv4", 00:10:56.191 "trsvcid": "$NVMF_PORT", 00:10:56.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:56.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:56.191 "hdgst": ${hdgst:-false}, 00:10:56.191 "ddgst": ${ddgst:-false} 00:10:56.191 }, 00:10:56.191 "method": "bdev_nvme_attach_controller" 00:10:56.191 } 00:10:56.191 EOF 00:10:56.191 )") 00:10:56.191 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:56.191 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:10:56.191 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:56.191 05:05:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:56.191 "params": { 00:10:56.191 "name": "Nvme1", 00:10:56.191 "trtype": "tcp", 00:10:56.191 "traddr": "10.0.0.2", 00:10:56.191 "adrfam": "ipv4", 00:10:56.191 "trsvcid": "4420", 00:10:56.191 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:56.191 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:56.191 "hdgst": false, 00:10:56.191 "ddgst": false 00:10:56.191 }, 00:10:56.191 "method": "bdev_nvme_attach_controller" 00:10:56.191 }' 00:10:56.191 [2024-12-09 05:05:38.448685] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:10:56.191 [2024-12-09 05:05:38.448732] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid373570 ] 00:10:56.191 [2024-12-09 05:05:38.544120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:56.191 [2024-12-09 05:05:38.586112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.191 [2024-12-09 05:05:38.586241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.191 [2024-12-09 05:05:38.586241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.450 I/O targets: 00:10:56.450 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:56.450 00:10:56.450 00:10:56.450 CUnit - A unit testing framework for C - Version 2.1-3 00:10:56.450 http://cunit.sourceforge.net/ 00:10:56.450 00:10:56.450 00:10:56.450 Suite: bdevio tests on: Nvme1n1 00:10:56.709 Test: blockdev write read block ...passed 00:10:56.709 Test: blockdev write zeroes read block ...passed 00:10:56.709 Test: blockdev write zeroes read no split ...passed 00:10:56.709 Test: blockdev write zeroes read split 
...passed 00:10:56.709 Test: blockdev write zeroes read split partial ...passed 00:10:56.709 Test: blockdev reset ...[2024-12-09 05:05:39.017963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:56.709 [2024-12-09 05:05:39.018028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128e890 (9): Bad file descriptor 00:10:56.709 [2024-12-09 05:05:39.071718] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:56.709 passed 00:10:56.709 Test: blockdev write read 8 blocks ...passed 00:10:56.709 Test: blockdev write read size > 128k ...passed 00:10:56.709 Test: blockdev write read invalid size ...passed 00:10:56.709 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:56.709 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:56.709 Test: blockdev write read max offset ...passed 00:10:56.968 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:56.968 Test: blockdev writev readv 8 blocks ...passed 00:10:56.968 Test: blockdev writev readv 30 x 1block ...passed 00:10:56.968 Test: blockdev writev readv block ...passed 00:10:56.968 Test: blockdev writev readv size > 128k ...passed 00:10:56.968 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:56.968 Test: blockdev comparev and writev ...[2024-12-09 05:05:39.326048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.968 [2024-12-09 05:05:39.326080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:56.968 [2024-12-09 05:05:39.326095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.968 [2024-12-09 
05:05:39.326106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:56.968 [2024-12-09 05:05:39.326349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.968 [2024-12-09 05:05:39.326362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:56.968 [2024-12-09 05:05:39.326376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.968 [2024-12-09 05:05:39.326386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:56.968 [2024-12-09 05:05:39.326608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.968 [2024-12-09 05:05:39.326620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:56.968 [2024-12-09 05:05:39.326633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.968 [2024-12-09 05:05:39.326647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:56.968 [2024-12-09 05:05:39.326878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.968 [2024-12-09 05:05:39.326890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:56.968 [2024-12-09 05:05:39.326904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.968 [2024-12-09 05:05:39.326915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:56.968 passed 00:10:56.968 Test: blockdev nvme passthru rw ...passed 00:10:56.968 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:05:39.408566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.968 [2024-12-09 05:05:39.408585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:56.968 [2024-12-09 05:05:39.408696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.968 [2024-12-09 05:05:39.408708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:56.968 [2024-12-09 05:05:39.408811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.968 [2024-12-09 05:05:39.408823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:56.968 [2024-12-09 05:05:39.408926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.968 [2024-12-09 05:05:39.408938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:56.968 passed 00:10:56.968 Test: blockdev nvme admin passthru ...passed 00:10:57.227 Test: blockdev copy ...passed 00:10:57.227 00:10:57.227 Run Summary: Type Total Ran Passed Failed Inactive 00:10:57.227 suites 1 1 n/a 0 0 00:10:57.227 tests 23 23 23 0 0 00:10:57.227 asserts 152 152 152 0 n/a 00:10:57.227 00:10:57.227 Elapsed time = 1.124 seconds 
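The bdevio run above received its controller configuration over `--json /dev/fd/62`, produced by `gen_nvmf_target_json`: one heredoc fragment per subsystem, joined with `IFS=,` and normalized through `jq .`, as traced before the test suite output. A simplified single-subsystem sketch of that helper, with defaults taken from the values printed in the trace (the real helper loops over subsystems and requires `jq`):

```shell
# Simplified sketch of gen_nvmf_target_json from nvmf/common.sh: emit the
# bdev_nvme_attach_controller config that bdevio consumes.
gen_target_json() {
    local n=${1:-1} traddr=${2:-10.0.0.2} trsvcid=${3:-4420}
    cat <<EOF
{
  "params": {
    "name": "Nvme$n",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
```

Feeding the generated document through process substitution, e.g. `bdevio --json <(gen_target_json 1)`, is equivalent to the `--json /dev/fd/62` form seen in the trace.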
00:10:57.227 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.227 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.227 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.227 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.227 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:57.227 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:57.227 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:57.227 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:57.227 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:57.227 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:57.227 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.227 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:57.227 rmmod nvme_tcp 00:10:57.227 rmmod nvme_fabrics 00:10:57.227 rmmod nvme_keyring 00:10:57.487 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:57.487 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:57.487 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:57.487 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 373424 ']' 00:10:57.487 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 373424 00:10:57.487 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 373424 ']' 00:10:57.487 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 373424 00:10:57.487 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:57.487 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.487 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 373424 00:10:57.487 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:57.487 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:57.487 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 373424' 00:10:57.487 killing process with pid 373424 00:10:57.487 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 373424 00:10:57.487 05:05:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 373424 00:10:57.746 05:05:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:57.746 05:05:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:57.746 05:05:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:57.746 05:05:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:57.746 05:05:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:57.746 05:05:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:57.746 05:05:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:57.746 05:05:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
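The teardown entries here (`iptr` at common.sh@791) pair with the earlier setup (`ipts` at @287/@790): rules are installed with a distinctive `SPDK_NVMF` comment so cleanup can later remove exactly those rules by filtering `iptables-save` output, without tracking any state. Reconstructed from the traced commands (function definitions only; applying them needs root):

```shell
# Tag-and-restore firewall cleanup, as visible in the nvmf/common.sh trace.
ipts() {
    # Insert the rule with a comment recording its original spec,
    # e.g. 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'.
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

iptr() {
    # Re-load the ruleset minus every SPDK_NVMF-tagged rule.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}
```

This idiom keeps the test harness idempotent: repeated runs never accumulate stale ACCEPT rules, and cleanup cannot touch rules the harness did not add.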
00:10:57.746 05:05:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:57.746 05:05:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.746 05:05:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.746 05:05:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.668 05:05:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:59.668 00:10:59.668 real 0m12.270s 00:10:59.668 user 0m14.136s 00:10:59.668 sys 0m6.291s 00:10:59.668 05:05:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.668 05:05:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.668 ************************************ 00:10:59.668 END TEST nvmf_bdevio 00:10:59.668 ************************************ 00:10:59.928 05:05:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:59.928 00:10:59.928 real 5m5.197s 00:10:59.928 user 11m3.517s 00:10:59.928 sys 2m2.805s 00:10:59.928 05:05:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.928 05:05:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:59.928 ************************************ 00:10:59.928 END TEST nvmf_target_core 00:10:59.928 ************************************ 00:10:59.928 05:05:42 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:59.928 05:05:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:59.928 05:05:42 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.928 05:05:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:10:59.928 ************************************ 00:10:59.928 START TEST nvmf_target_extra 00:10:59.928 ************************************ 00:10:59.928 05:05:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:59.928 * Looking for test storage... 00:10:59.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:59.928 05:05:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:59.928 05:05:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:59.928 05:05:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:00.189 05:05:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:00.189 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.189 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.189 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.189 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.189 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.189 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.189 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.189 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.189 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.189 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.189 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.189 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:11:00.189 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:00.189 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.189 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:00.189 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:00.189 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:00.189 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.189 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:00.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.190 --rc genhtml_branch_coverage=1 00:11:00.190 --rc genhtml_function_coverage=1 00:11:00.190 --rc genhtml_legend=1 00:11:00.190 --rc geninfo_all_blocks=1 
00:11:00.190 --rc geninfo_unexecuted_blocks=1 00:11:00.190 00:11:00.190 ' 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:00.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.190 --rc genhtml_branch_coverage=1 00:11:00.190 --rc genhtml_function_coverage=1 00:11:00.190 --rc genhtml_legend=1 00:11:00.190 --rc geninfo_all_blocks=1 00:11:00.190 --rc geninfo_unexecuted_blocks=1 00:11:00.190 00:11:00.190 ' 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:00.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.190 --rc genhtml_branch_coverage=1 00:11:00.190 --rc genhtml_function_coverage=1 00:11:00.190 --rc genhtml_legend=1 00:11:00.190 --rc geninfo_all_blocks=1 00:11:00.190 --rc geninfo_unexecuted_blocks=1 00:11:00.190 00:11:00.190 ' 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:00.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.190 --rc genhtml_branch_coverage=1 00:11:00.190 --rc genhtml_function_coverage=1 00:11:00.190 --rc genhtml_legend=1 00:11:00.190 --rc geninfo_all_blocks=1 00:11:00.190 --rc geninfo_unexecuted_blocks=1 00:11:00.190 00:11:00.190 ' 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:00.190 ************************************ 00:11:00.190 START TEST nvmf_example 00:11:00.190 ************************************ 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:00.190 * Looking for test storage... 00:11:00.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:00.190 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.453 
05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:00.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.453 --rc genhtml_branch_coverage=1 00:11:00.453 --rc genhtml_function_coverage=1 00:11:00.453 --rc genhtml_legend=1 00:11:00.453 --rc geninfo_all_blocks=1 00:11:00.453 --rc geninfo_unexecuted_blocks=1 00:11:00.453 00:11:00.453 ' 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:00.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.453 --rc genhtml_branch_coverage=1 00:11:00.453 --rc genhtml_function_coverage=1 00:11:00.453 --rc genhtml_legend=1 00:11:00.453 --rc geninfo_all_blocks=1 00:11:00.453 --rc geninfo_unexecuted_blocks=1 00:11:00.453 00:11:00.453 ' 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:00.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.453 --rc genhtml_branch_coverage=1 00:11:00.453 --rc genhtml_function_coverage=1 00:11:00.453 --rc genhtml_legend=1 00:11:00.453 --rc geninfo_all_blocks=1 00:11:00.453 --rc geninfo_unexecuted_blocks=1 00:11:00.453 00:11:00.453 ' 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:00.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.453 --rc 
genhtml_branch_coverage=1 00:11:00.453 --rc genhtml_function_coverage=1 00:11:00.453 --rc genhtml_legend=1 00:11:00.453 --rc geninfo_all_blocks=1 00:11:00.453 --rc geninfo_unexecuted_blocks=1 00:11:00.453 00:11:00.453 ' 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:00.453 05:05:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:00.453 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:00.454 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:00.454 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:00.454 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:00.454 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:00.454 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:00.454 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:00.454 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:00.454 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:00.454 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.454 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:00.454 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:00.454 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:00.454 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.454 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.454 
05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.454 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:00.454 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:00.454 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:00.454 05:05:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:08.583 05:05:49 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:08.583 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:08.583 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:08.583 Found net devices under 0000:af:00.0: cvl_0_0 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:08.583 05:05:49 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:08.583 Found net devices under 0000:af:00.1: cvl_0_1 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.583 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.584 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:08.584 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.584 
05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.584 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:08.584 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:08.584 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.584 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.584 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:08.584 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:08.584 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.584 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.584 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.584 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.584 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:08.584 05:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:08.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:11:08.584 00:11:08.584 --- 10.0.0.2 ping statistics --- 00:11:08.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.584 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:08.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:11:08.584 00:11:08.584 --- 10.0.0.1 ping statistics --- 00:11:08.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.584 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:08.584 05:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=377754 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 377754 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 377754 ']' 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:08.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.584 05:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.584 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.584 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:08.584 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:08.584 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:08.584 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:08.844 05:05:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:08.844 05:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:21.067 Initializing NVMe Controllers 00:11:21.067 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:21.067 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:21.067 Initialization complete. Launching workers. 00:11:21.067 ======================================================== 00:11:21.067 Latency(us) 00:11:21.067 Device Information : IOPS MiB/s Average min max 00:11:21.067 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18616.59 72.72 3437.23 560.09 15448.15 00:11:21.067 ======================================================== 00:11:21.067 Total : 18616.59 72.72 3437.23 560.09 15448.15 00:11:21.067 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:21.067 rmmod nvme_tcp 00:11:21.067 rmmod nvme_fabrics 00:11:21.067 rmmod nvme_keyring 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 377754 ']' 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 377754 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 377754 ']' 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 377754 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 377754 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 377754' 00:11:21.067 killing process with pid 377754 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 377754 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 377754 00:11:21.067 nvmf threads initialize successfully 00:11:21.067 bdev subsystem init successfully 00:11:21.067 created a nvmf target service 00:11:21.067 create targets's poll groups done 00:11:21.067 all subsystems of target started 00:11:21.067 nvmf target is running 00:11:21.067 all subsystems of target stopped 00:11:21.067 destroy targets's poll groups done 00:11:21.067 destroyed the nvmf target service 00:11:21.067 bdev subsystem finish 
successfully 00:11:21.067 nvmf threads destroy successfully 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.067 05:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.638 05:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:21.638 05:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:21.638 05:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:21.638 05:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:21.638 00:11:21.638 real 0m21.441s 00:11:21.638 user 0m46.455s 00:11:21.638 sys 0m7.742s 00:11:21.638 05:06:03 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.638 05:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:21.638 ************************************ 00:11:21.638 END TEST nvmf_example 00:11:21.638 ************************************ 00:11:21.638 05:06:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:21.638 05:06:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:21.638 05:06:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.638 05:06:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:21.638 ************************************ 00:11:21.638 START TEST nvmf_filesystem 00:11:21.638 ************************************ 00:11:21.638 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:21.901 * Looking for test storage... 
00:11:21.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:21.901 
05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:21.901 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:21.902 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:21.902 --rc genhtml_branch_coverage=1 00:11:21.902 --rc genhtml_function_coverage=1 00:11:21.902 --rc genhtml_legend=1 00:11:21.902 --rc geninfo_all_blocks=1 00:11:21.902 --rc geninfo_unexecuted_blocks=1 00:11:21.902 00:11:21.902 ' 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:21.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.902 --rc genhtml_branch_coverage=1 00:11:21.902 --rc genhtml_function_coverage=1 00:11:21.902 --rc genhtml_legend=1 00:11:21.902 --rc geninfo_all_blocks=1 00:11:21.902 --rc geninfo_unexecuted_blocks=1 00:11:21.902 00:11:21.902 ' 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:21.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.902 --rc genhtml_branch_coverage=1 00:11:21.902 --rc genhtml_function_coverage=1 00:11:21.902 --rc genhtml_legend=1 00:11:21.902 --rc geninfo_all_blocks=1 00:11:21.902 --rc geninfo_unexecuted_blocks=1 00:11:21.902 00:11:21.902 ' 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:21.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.902 --rc genhtml_branch_coverage=1 00:11:21.902 --rc genhtml_function_coverage=1 00:11:21.902 --rc genhtml_legend=1 00:11:21.902 --rc geninfo_all_blocks=1 00:11:21.902 --rc geninfo_unexecuted_blocks=1 00:11:21.902 00:11:21.902 ' 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:21.902 05:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:21.902 05:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:21.902 05:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:21.902 05:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:21.902 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:21.903 05:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:21.903 
05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:21.903 #define SPDK_CONFIG_H 00:11:21.903 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:21.903 #define SPDK_CONFIG_APPS 1 00:11:21.903 #define SPDK_CONFIG_ARCH native 00:11:21.903 #undef SPDK_CONFIG_ASAN 00:11:21.903 #undef SPDK_CONFIG_AVAHI 00:11:21.903 #undef SPDK_CONFIG_CET 00:11:21.903 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:21.903 #define SPDK_CONFIG_COVERAGE 1 00:11:21.903 #define SPDK_CONFIG_CROSS_PREFIX 00:11:21.903 #undef SPDK_CONFIG_CRYPTO 00:11:21.903 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:21.903 #undef SPDK_CONFIG_CUSTOMOCF 00:11:21.903 #undef SPDK_CONFIG_DAOS 00:11:21.903 #define SPDK_CONFIG_DAOS_DIR 00:11:21.903 #define SPDK_CONFIG_DEBUG 1 00:11:21.903 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:21.903 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:21.903 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:21.903 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:21.903 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:21.903 #undef SPDK_CONFIG_DPDK_UADK 00:11:21.903 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:21.903 #define SPDK_CONFIG_EXAMPLES 1 00:11:21.903 #undef SPDK_CONFIG_FC 00:11:21.903 #define SPDK_CONFIG_FC_PATH 00:11:21.903 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:21.903 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:21.903 #define SPDK_CONFIG_FSDEV 1 00:11:21.903 #undef SPDK_CONFIG_FUSE 00:11:21.903 #undef SPDK_CONFIG_FUZZER 00:11:21.903 #define SPDK_CONFIG_FUZZER_LIB 00:11:21.903 #undef SPDK_CONFIG_GOLANG 00:11:21.903 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:21.903 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:21.903 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:21.903 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:21.903 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:21.903 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:21.903 #undef SPDK_CONFIG_HAVE_LZ4 00:11:21.903 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:21.903 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:21.903 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:21.903 #define SPDK_CONFIG_IDXD 1 00:11:21.903 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:21.903 #undef SPDK_CONFIG_IPSEC_MB 00:11:21.903 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:21.903 #define SPDK_CONFIG_ISAL 1 00:11:21.903 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:21.903 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:21.903 #define SPDK_CONFIG_LIBDIR 00:11:21.903 #undef SPDK_CONFIG_LTO 00:11:21.903 #define SPDK_CONFIG_MAX_LCORES 128 00:11:21.903 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:21.903 #define SPDK_CONFIG_NVME_CUSE 1 00:11:21.903 #undef SPDK_CONFIG_OCF 00:11:21.903 #define SPDK_CONFIG_OCF_PATH 00:11:21.903 #define SPDK_CONFIG_OPENSSL_PATH 00:11:21.903 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:21.903 #define SPDK_CONFIG_PGO_DIR 00:11:21.903 #undef SPDK_CONFIG_PGO_USE 00:11:21.903 #define SPDK_CONFIG_PREFIX /usr/local 00:11:21.903 #undef SPDK_CONFIG_RAID5F 00:11:21.903 #undef SPDK_CONFIG_RBD 00:11:21.903 #define SPDK_CONFIG_RDMA 1 00:11:21.903 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:21.903 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:21.903 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:21.903 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:21.903 #define SPDK_CONFIG_SHARED 1 00:11:21.903 #undef SPDK_CONFIG_SMA 00:11:21.903 #define SPDK_CONFIG_TESTS 1 00:11:21.903 #undef SPDK_CONFIG_TSAN 00:11:21.903 #define SPDK_CONFIG_UBLK 1 00:11:21.903 #define SPDK_CONFIG_UBSAN 1 00:11:21.903 #undef SPDK_CONFIG_UNIT_TESTS 00:11:21.903 #undef SPDK_CONFIG_URING 00:11:21.903 #define SPDK_CONFIG_URING_PATH 00:11:21.903 #undef SPDK_CONFIG_URING_ZNS 00:11:21.903 #undef SPDK_CONFIG_USDT 00:11:21.903 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:21.903 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:21.903 #define SPDK_CONFIG_VFIO_USER 1 00:11:21.903 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:21.903 #define SPDK_CONFIG_VHOST 1 00:11:21.903 #define SPDK_CONFIG_VIRTIO 1 00:11:21.903 #undef SPDK_CONFIG_VTUNE 00:11:21.903 #define SPDK_CONFIG_VTUNE_DIR 00:11:21.903 #define SPDK_CONFIG_WERROR 1 00:11:21.903 #define SPDK_CONFIG_WPDK_DIR 00:11:21.903 #undef SPDK_CONFIG_XNVME 00:11:21.903 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.903 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:21.904 05:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:21.904 
05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:21.904 05:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:21.904 
05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:21.904 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:21.905 05:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:21.905 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 380500 ]] 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 380500 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.DB57QU 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.DB57QU/tests/target /tmp/spdk.DB57QU 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:21.906 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=56022364160 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61730635776 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5708271616 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:22.167 
05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30855286784 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865317888 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12323082240 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346130432 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23048192 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30865137664 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865317888 00:11:22.167 05:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=180224 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6173048832 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173061120 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:22.167 * Looking for test storage... 
00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=56022364160 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:22.167 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=7922864128 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.168 05:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:22.168 05:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:22.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.168 --rc genhtml_branch_coverage=1 00:11:22.168 --rc genhtml_function_coverage=1 00:11:22.168 --rc genhtml_legend=1 00:11:22.168 --rc geninfo_all_blocks=1 00:11:22.168 --rc geninfo_unexecuted_blocks=1 00:11:22.168 00:11:22.168 ' 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:22.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.168 --rc genhtml_branch_coverage=1 00:11:22.168 --rc genhtml_function_coverage=1 00:11:22.168 --rc genhtml_legend=1 00:11:22.168 --rc geninfo_all_blocks=1 00:11:22.168 --rc geninfo_unexecuted_blocks=1 00:11:22.168 00:11:22.168 ' 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:22.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.168 --rc genhtml_branch_coverage=1 00:11:22.168 --rc genhtml_function_coverage=1 00:11:22.168 --rc genhtml_legend=1 00:11:22.168 --rc geninfo_all_blocks=1 00:11:22.168 --rc geninfo_unexecuted_blocks=1 00:11:22.168 00:11:22.168 ' 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:22.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.168 --rc genhtml_branch_coverage=1 00:11:22.168 --rc genhtml_function_coverage=1 00:11:22.168 --rc genhtml_legend=1 00:11:22.168 --rc geninfo_all_blocks=1 00:11:22.168 --rc geninfo_unexecuted_blocks=1 00:11:22.168 00:11:22.168 ' 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.168 05:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.168 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:22.169 05:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:30.299 05:06:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:30.299 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:30.299 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.299 05:06:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:30.299 Found net devices under 0000:af:00.0: cvl_0_0 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:30.299 Found net devices under 0000:af:00.1: cvl_0_1 00:11:30.299 05:06:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:30.299 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:30.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:30.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:11:30.300 00:11:30.300 --- 10.0.0.2 ping statistics --- 00:11:30.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.300 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:30.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:30.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:11:30.300 00:11:30.300 --- 10.0.0.1 ping statistics --- 00:11:30.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.300 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:30.300 05:06:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:30.300 ************************************ 00:11:30.300 START TEST nvmf_filesystem_no_in_capsule 00:11:30.300 ************************************ 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=384210 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 384210 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 384210 ']' 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.300 05:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.300 [2024-12-09 05:06:11.932234] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:11:30.300 [2024-12-09 05:06:11.932280] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.300 [2024-12-09 05:06:12.032151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:30.300 [2024-12-09 05:06:12.069970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.300 [2024-12-09 05:06:12.070011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:30.300 [2024-12-09 05:06:12.070020] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.300 [2024-12-09 05:06:12.070028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.300 [2024-12-09 05:06:12.070035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.300 [2024-12-09 05:06:12.071774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.300 [2024-12-09 05:06:12.071885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.300 [2024-12-09 05:06:12.071972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.300 [2024-12-09 05:06:12.071973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.561 [2024-12-09 05:06:12.820876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.561 Malloc1 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.561 [2024-12-09 05:06:12.984200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:30.561 05:06:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.561 05:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.561 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.561 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:30.561 { 00:11:30.561 "name": "Malloc1", 00:11:30.561 "aliases": [ 00:11:30.561 "bfe8e5ed-b921-4a0b-a937-62f4bf4e6574" 00:11:30.561 ], 00:11:30.561 "product_name": "Malloc disk", 00:11:30.561 "block_size": 512, 00:11:30.561 "num_blocks": 1048576, 00:11:30.561 "uuid": "bfe8e5ed-b921-4a0b-a937-62f4bf4e6574", 00:11:30.561 "assigned_rate_limits": { 00:11:30.561 "rw_ios_per_sec": 0, 00:11:30.561 "rw_mbytes_per_sec": 0, 00:11:30.561 "r_mbytes_per_sec": 0, 00:11:30.561 "w_mbytes_per_sec": 0 00:11:30.561 }, 00:11:30.561 "claimed": true, 00:11:30.561 "claim_type": "exclusive_write", 00:11:30.561 "zoned": false, 00:11:30.561 "supported_io_types": { 00:11:30.561 "read": true, 00:11:30.561 "write": true, 00:11:30.561 "unmap": true, 00:11:30.561 "flush": true, 00:11:30.561 "reset": true, 00:11:30.561 "nvme_admin": false, 00:11:30.561 "nvme_io": false, 00:11:30.561 "nvme_io_md": false, 00:11:30.561 "write_zeroes": true, 00:11:30.561 "zcopy": true, 00:11:30.561 "get_zone_info": false, 00:11:30.561 "zone_management": false, 00:11:30.561 "zone_append": false, 00:11:30.561 "compare": false, 00:11:30.561 "compare_and_write": 
false, 00:11:30.561 "abort": true, 00:11:30.561 "seek_hole": false, 00:11:30.561 "seek_data": false, 00:11:30.561 "copy": true, 00:11:30.561 "nvme_iov_md": false 00:11:30.561 }, 00:11:30.561 "memory_domains": [ 00:11:30.561 { 00:11:30.561 "dma_device_id": "system", 00:11:30.561 "dma_device_type": 1 00:11:30.561 }, 00:11:30.561 { 00:11:30.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.561 "dma_device_type": 2 00:11:30.561 } 00:11:30.561 ], 00:11:30.561 "driver_specific": {} 00:11:30.561 } 00:11:30.561 ]' 00:11:30.561 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:30.820 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:30.820 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:30.820 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:30.820 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:30.820 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:30.820 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:30.820 05:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:32.201 05:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:32.201 05:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:32.201 05:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:32.201 05:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:32.201 05:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:34.106 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:34.106 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:34.106 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:34.106 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:34.106 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:34.106 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:34.106 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:34.106 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:34.106 05:06:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:34.106 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:34.106 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:34.106 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:34.106 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:34.106 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:34.106 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:34.106 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:34.106 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:34.364 05:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:34.932 05:06:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:36.317 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:36.317 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:36.317 05:06:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:36.317 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.317 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.317 ************************************ 00:11:36.317 START TEST filesystem_ext4 00:11:36.317 ************************************ 00:11:36.317 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:36.317 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:36.317 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:36.317 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:36.317 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:36.317 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:36.317 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:36.317 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:36.317 05:06:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:36.317 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:36.317 05:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:36.317 mke2fs 1.47.0 (5-Feb-2023) 00:11:36.317 Discarding device blocks: 0/522240 done 00:11:36.317 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:36.317 Filesystem UUID: 23ba24a9-e943-4147-a747-4ee6c89a8926 00:11:36.317 Superblock backups stored on blocks: 00:11:36.317 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:36.317 00:11:36.317 Allocating group tables: 0/64 done 00:11:36.317 Writing inode tables: 0/64 done 00:11:36.317 Creating journal (8192 blocks): done 00:11:38.107 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:11:38.107 00:11:38.107 05:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:38.107 05:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:43.662 05:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 384210 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:43.662 00:11:43.662 real 0m7.354s 00:11:43.662 user 0m0.027s 00:11:43.662 sys 0m0.087s 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:43.662 ************************************ 00:11:43.662 END TEST filesystem_ext4 00:11:43.662 ************************************ 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:43.662 
05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.662 ************************************ 00:11:43.662 START TEST filesystem_btrfs 00:11:43.662 ************************************ 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:43.662 05:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:43.662 05:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:43.921 btrfs-progs v6.8.1 00:11:43.921 See https://btrfs.readthedocs.io for more information. 00:11:43.921 00:11:43.921 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:43.921 NOTE: several default settings have changed in version 5.15, please make sure 00:11:43.921 this does not affect your deployments: 00:11:43.921 - DUP for metadata (-m dup) 00:11:43.921 - enabled no-holes (-O no-holes) 00:11:43.921 - enabled free-space-tree (-R free-space-tree) 00:11:43.921 00:11:43.921 Label: (null) 00:11:43.921 UUID: f1754d13-8345-49c2-ad17-79140f684c1a 00:11:43.921 Node size: 16384 00:11:43.921 Sector size: 4096 (CPU page size: 4096) 00:11:43.921 Filesystem size: 510.00MiB 00:11:43.921 Block group profiles: 00:11:43.921 Data: single 8.00MiB 00:11:43.921 Metadata: DUP 32.00MiB 00:11:43.921 System: DUP 8.00MiB 00:11:43.921 SSD detected: yes 00:11:43.921 Zoned device: no 00:11:43.921 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:43.921 Checksum: crc32c 00:11:43.921 Number of devices: 1 00:11:43.921 Devices: 00:11:43.921 ID SIZE PATH 00:11:43.921 1 510.00MiB /dev/nvme0n1p1 00:11:43.921 00:11:43.921 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:43.921 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:44.489 05:06:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:44.489 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:44.489 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:44.489 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:44.489 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:44.489 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:44.489 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 384210 00:11:44.489 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:44.489 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:44.489 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:44.489 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:44.489 00:11:44.489 real 0m1.053s 00:11:44.489 user 0m0.031s 00:11:44.489 sys 0m0.170s 00:11:44.489 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.489 
05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:44.489 ************************************ 00:11:44.489 END TEST filesystem_btrfs 00:11:44.489 ************************************ 00:11:44.749 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:44.749 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:44.749 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.749 05:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.749 ************************************ 00:11:44.749 START TEST filesystem_xfs 00:11:44.749 ************************************ 00:11:44.749 05:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:44.749 05:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:44.749 05:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:44.749 05:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:44.749 05:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:44.749 05:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:44.749 05:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:44.749 05:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:44.749 05:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:44.749 05:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:44.749 05:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:44.749 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:44.749 = sectsz=512 attr=2, projid32bit=1 00:11:44.749 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:44.749 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:44.749 data = bsize=4096 blocks=130560, imaxpct=25 00:11:44.749 = sunit=0 swidth=0 blks 00:11:44.749 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:44.749 log =internal log bsize=4096 blocks=16384, version=2 00:11:44.749 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:44.749 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:45.685 Discarding blocks...Done. 
00:11:45.685 05:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:45.685 05:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:48.220 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:48.220 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 384210 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:48.221 05:06:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:48.221 00:11:48.221 real 0m3.290s 00:11:48.221 user 0m0.026s 00:11:48.221 sys 0m0.128s 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:48.221 ************************************ 00:11:48.221 END TEST filesystem_xfs 00:11:48.221 ************************************ 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:48.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 384210 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 384210 ']' 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 384210 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.221 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 384210 00:11:48.496 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:48.496 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:48.496 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 384210' 00:11:48.496 killing process with pid 384210 00:11:48.496 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 384210 00:11:48.497 05:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 384210 00:11:48.756 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:48.756 00:11:48.756 real 0m19.210s 00:11:48.756 user 1m15.331s 00:11:48.756 sys 0m2.052s 00:11:48.756 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.756 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.756 ************************************ 00:11:48.756 END TEST nvmf_filesystem_no_in_capsule 00:11:48.757 ************************************ 00:11:48.757 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:48.757 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:48.757 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.757 05:06:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:48.757 ************************************ 00:11:48.757 START TEST nvmf_filesystem_in_capsule 00:11:48.757 ************************************ 00:11:48.757 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:48.757 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:48.757 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:48.757 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:48.757 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:48.757 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.757 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=387657 00:11:48.757 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 387657 00:11:48.757 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:48.757 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 387657 ']' 00:11:48.757 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.757 05:06:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.757 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.757 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.757 05:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.016 [2024-12-09 05:06:31.228503] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:11:49.016 [2024-12-09 05:06:31.228546] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.016 [2024-12-09 05:06:31.325077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.016 [2024-12-09 05:06:31.367890] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.016 [2024-12-09 05:06:31.367930] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.016 [2024-12-09 05:06:31.367940] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.016 [2024-12-09 05:06:31.367949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.016 [2024-12-09 05:06:31.367956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:49.016 [2024-12-09 05:06:31.369767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.016 [2024-12-09 05:06:31.369876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.016 [2024-12-09 05:06:31.369984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.016 [2024-12-09 05:06:31.369985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.949 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.949 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:49.949 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:49.949 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:49.949 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.949 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.949 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:49.949 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:49.949 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.949 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.949 [2024-12-09 05:06:32.123136] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:49.949 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.949 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:49.949 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.950 Malloc1 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.950 05:06:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.950 [2024-12-09 05:06:32.282671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.950 05:06:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:49.950 { 00:11:49.950 "name": "Malloc1", 00:11:49.950 "aliases": [ 00:11:49.950 "e6341fd4-8c76-4e77-bc17-fb090ce5dca4" 00:11:49.950 ], 00:11:49.950 "product_name": "Malloc disk", 00:11:49.950 "block_size": 512, 00:11:49.950 "num_blocks": 1048576, 00:11:49.950 "uuid": "e6341fd4-8c76-4e77-bc17-fb090ce5dca4", 00:11:49.950 "assigned_rate_limits": { 00:11:49.950 "rw_ios_per_sec": 0, 00:11:49.950 "rw_mbytes_per_sec": 0, 00:11:49.950 "r_mbytes_per_sec": 0, 00:11:49.950 "w_mbytes_per_sec": 0 00:11:49.950 }, 00:11:49.950 "claimed": true, 00:11:49.950 "claim_type": "exclusive_write", 00:11:49.950 "zoned": false, 00:11:49.950 "supported_io_types": { 00:11:49.950 "read": true, 00:11:49.950 "write": true, 00:11:49.950 "unmap": true, 00:11:49.950 "flush": true, 00:11:49.950 "reset": true, 00:11:49.950 "nvme_admin": false, 00:11:49.950 "nvme_io": false, 00:11:49.950 "nvme_io_md": false, 00:11:49.950 "write_zeroes": true, 00:11:49.950 "zcopy": true, 00:11:49.950 "get_zone_info": false, 00:11:49.950 "zone_management": false, 00:11:49.950 "zone_append": false, 00:11:49.950 "compare": false, 00:11:49.950 "compare_and_write": false, 00:11:49.950 "abort": true, 00:11:49.950 "seek_hole": false, 00:11:49.950 "seek_data": false, 00:11:49.950 "copy": true, 00:11:49.950 "nvme_iov_md": false 00:11:49.950 }, 00:11:49.950 "memory_domains": [ 00:11:49.950 { 00:11:49.950 "dma_device_id": "system", 00:11:49.950 "dma_device_type": 1 00:11:49.950 }, 00:11:49.950 { 00:11:49.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.950 "dma_device_type": 2 00:11:49.950 } 00:11:49.950 ], 00:11:49.950 
"driver_specific": {} 00:11:49.950 } 00:11:49.950 ]' 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:49.950 05:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:51.323 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:51.323 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:51.323 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.323 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:11:51.323 05:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:53.854 05:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:53.854 05:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:53.854 05:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.854 05:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:53.854 05:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.854 05:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:53.854 05:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:53.854 05:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:53.854 05:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:53.854 05:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:53.854 05:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:53.854 05:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:53.854 05:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:53.854 05:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:53.854 05:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:53.854 05:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:53.854 05:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:53.854 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:53.854 05:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:55.234 05:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:55.234 05:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:55.234 05:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:55.234 05:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.234 05:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.234 ************************************ 00:11:55.234 START TEST filesystem_in_capsule_ext4 00:11:55.234 ************************************ 00:11:55.234 05:06:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:55.234 05:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:55.234 05:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:55.234 05:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:55.234 05:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:55.234 05:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:55.234 05:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:55.234 05:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:55.234 05:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:55.234 05:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:55.234 05:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:55.234 mke2fs 1.47.0 (5-Feb-2023) 00:11:55.234 Discarding device blocks: 
0/522240 done 00:11:55.234 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:55.234 Filesystem UUID: db44d1e6-73da-4b62-9581-037c5f4677d5 00:11:55.234 Superblock backups stored on blocks: 00:11:55.234 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:55.234 00:11:55.234 Allocating group tables: 0/64 done 00:11:55.234 Writing inode tables: 0/64 done 00:11:55.803 Creating journal (8192 blocks): done 00:11:57.752 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:11:57.752 00:11:57.752 05:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:57.752 05:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:03.019 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:03.019 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:03.019 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:03.019 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:03.019 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:03.019 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:03.019 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 387657 00:12:03.019 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:03.019 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:03.278 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:03.278 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:03.278 00:12:03.278 real 0m8.156s 00:12:03.278 user 0m0.030s 00:12:03.278 sys 0m0.087s 00:12:03.278 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.278 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:03.278 ************************************ 00:12:03.278 END TEST filesystem_in_capsule_ext4 00:12:03.278 ************************************ 00:12:03.278 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:03.278 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:03.278 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.278 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.278 ************************************ 00:12:03.278 START 
TEST filesystem_in_capsule_btrfs 00:12:03.278 ************************************ 00:12:03.278 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:03.278 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:03.278 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:03.278 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:03.278 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:03.278 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:03.278 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:03.278 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:03.278 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:03.278 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:03.278 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:03.536 btrfs-progs v6.8.1 00:12:03.536 See https://btrfs.readthedocs.io for more information. 00:12:03.536 00:12:03.536 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:03.536 NOTE: several default settings have changed in version 5.15, please make sure 00:12:03.536 this does not affect your deployments: 00:12:03.536 - DUP for metadata (-m dup) 00:12:03.536 - enabled no-holes (-O no-holes) 00:12:03.536 - enabled free-space-tree (-R free-space-tree) 00:12:03.536 00:12:03.536 Label: (null) 00:12:03.536 UUID: 1c1657ec-5f3e-46e1-ac70-72bf4e035ba9 00:12:03.536 Node size: 16384 00:12:03.536 Sector size: 4096 (CPU page size: 4096) 00:12:03.536 Filesystem size: 510.00MiB 00:12:03.536 Block group profiles: 00:12:03.536 Data: single 8.00MiB 00:12:03.536 Metadata: DUP 32.00MiB 00:12:03.536 System: DUP 8.00MiB 00:12:03.536 SSD detected: yes 00:12:03.536 Zoned device: no 00:12:03.536 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:03.536 Checksum: crc32c 00:12:03.536 Number of devices: 1 00:12:03.536 Devices: 00:12:03.536 ID SIZE PATH 00:12:03.536 1 510.00MiB /dev/nvme0n1p1 00:12:03.536 00:12:03.536 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:03.536 05:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 387657 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:03.796 00:12:03.796 real 0m0.547s 00:12:03.796 user 0m0.033s 00:12:03.796 sys 0m0.127s 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:03.796 ************************************ 00:12:03.796 END TEST filesystem_in_capsule_btrfs 00:12:03.796 ************************************ 00:12:03.796 05:06:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.796 ************************************ 00:12:03.796 START TEST filesystem_in_capsule_xfs 00:12:03.796 ************************************ 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:03.796 
05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:03.796 05:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:04.056 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:04.056 = sectsz=512 attr=2, projid32bit=1 00:12:04.056 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:04.056 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:04.056 data = bsize=4096 blocks=130560, imaxpct=25 00:12:04.056 = sunit=0 swidth=0 blks 00:12:04.056 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:04.056 log =internal log bsize=4096 blocks=16384, version=2 00:12:04.056 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:04.056 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:04.623 Discarding blocks...Done. 
00:12:04.623 05:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:04.623 05:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:07.160 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:07.160 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:07.160 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:07.160 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:07.160 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:07.160 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:07.160 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 387657 00:12:07.160 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:07.160 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:07.160 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:07.160 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:07.160 00:12:07.160 real 0m3.250s 00:12:07.160 user 0m0.029s 00:12:07.160 sys 0m0.084s 00:12:07.160 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.160 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:07.160 ************************************ 00:12:07.160 END TEST filesystem_in_capsule_xfs 00:12:07.160 ************************************ 00:12:07.160 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:07.160 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:07.160 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.419 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:07.419 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:07.419 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:07.419 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.419 05:06:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:07.419 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.419 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:07.419 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.419 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.419 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.419 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.419 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:07.419 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 387657 00:12:07.419 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 387657 ']' 00:12:07.419 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 387657 00:12:07.419 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:07.419 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.419 05:06:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 387657 00:12:07.419 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.419 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:07.419 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 387657' 00:12:07.419 killing process with pid 387657 00:12:07.419 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 387657 00:12:07.419 05:06:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 387657 00:12:07.678 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:07.678 00:12:07.678 real 0m18.977s 00:12:07.678 user 1m14.392s 00:12:07.678 sys 0m1.979s 00:12:07.678 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.678 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.937 ************************************ 00:12:07.937 END TEST nvmf_filesystem_in_capsule 00:12:07.937 ************************************ 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:07.937 rmmod nvme_tcp 00:12:07.937 rmmod nvme_fabrics 00:12:07.937 rmmod nvme_keyring 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.937 05:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.469 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:10.469 00:12:10.469 real 0m48.353s 00:12:10.469 user 2m32.060s 00:12:10.469 sys 0m9.931s 00:12:10.469 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.469 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:10.469 ************************************ 00:12:10.469 END TEST nvmf_filesystem 00:12:10.469 ************************************ 00:12:10.469 05:06:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:10.469 05:06:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:10.469 05:06:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.469 05:06:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:10.469 ************************************ 00:12:10.469 START TEST nvmf_target_discovery 00:12:10.469 ************************************ 00:12:10.469 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:10.469 * Looking for test storage... 
00:12:10.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:10.470 
05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:10.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.470 --rc genhtml_branch_coverage=1 00:12:10.470 --rc genhtml_function_coverage=1 00:12:10.470 --rc genhtml_legend=1 00:12:10.470 --rc geninfo_all_blocks=1 00:12:10.470 --rc geninfo_unexecuted_blocks=1 00:12:10.470 00:12:10.470 ' 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:10.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.470 --rc genhtml_branch_coverage=1 00:12:10.470 --rc genhtml_function_coverage=1 00:12:10.470 --rc genhtml_legend=1 00:12:10.470 --rc geninfo_all_blocks=1 00:12:10.470 --rc geninfo_unexecuted_blocks=1 00:12:10.470 00:12:10.470 ' 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:10.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.470 --rc genhtml_branch_coverage=1 00:12:10.470 --rc genhtml_function_coverage=1 00:12:10.470 --rc genhtml_legend=1 00:12:10.470 --rc geninfo_all_blocks=1 00:12:10.470 --rc geninfo_unexecuted_blocks=1 00:12:10.470 00:12:10.470 ' 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:10.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.470 --rc genhtml_branch_coverage=1 00:12:10.470 --rc genhtml_function_coverage=1 00:12:10.470 --rc genhtml_legend=1 00:12:10.470 --rc geninfo_all_blocks=1 00:12:10.470 --rc geninfo_unexecuted_blocks=1 00:12:10.470 00:12:10.470 ' 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.470 05:06:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.470 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:10.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:10.471 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:10.471 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:10.471 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:10.471 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:10.471 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:10.471 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:10.471 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:10.471 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:10.471 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:10.471 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.471 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:10.471 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:10.471 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:10.471 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.471 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.471 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.471 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:10.471 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:10.471 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:10.471 05:06:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.600 05:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.600 05:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:18.600 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:18.600 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:18.600 05:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:18.600 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:18.601 Found net devices under 0000:af:00.0: cvl_0_0 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:18.601 05:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:18.601 Found net devices under 0000:af:00.1: cvl_0_1 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:18.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:18.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:12:18.601 00:12:18.601 --- 10.0.0.2 ping statistics --- 00:12:18.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.601 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:12:18.601 05:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:18.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:18.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:12:18.601 00:12:18.601 --- 10.0.0.1 ping statistics --- 00:12:18.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.601 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=394871 00:12:18.601 05:07:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 394871 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 394871 ']' 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.601 [2024-12-09 05:07:00.118873] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:12:18.601 [2024-12-09 05:07:00.118926] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.601 [2024-12-09 05:07:00.219313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.601 [2024-12-09 05:07:00.259170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:18.601 [2024-12-09 05:07:00.259215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.601 [2024-12-09 05:07:00.259231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.601 [2024-12-09 05:07:00.259242] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.601 [2024-12-09 05:07:00.259253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.601 [2024-12-09 05:07:00.260987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.601 [2024-12-09 05:07:00.261024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.601 [2024-12-09 05:07:00.261133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.601 [2024-12-09 05:07:00.261133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:18.601 05:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.601 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.601 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:18.601 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.601 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.601 [2024-12-09 05:07:01.013674] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.601 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.601 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:18.601 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:18.601 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:18.601 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.602 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.602 Null1 00:12:18.602 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.602 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:18.602 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.602 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.602 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.602 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:18.602 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.602 
05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.602 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.602 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.602 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.602 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.861 [2024-12-09 05:07:01.080386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.861 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.861 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:18.861 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:18.861 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.861 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.861 Null2 00:12:18.861 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.862 
05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.862 Null3 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.862 Null4 00:12:18.862 
05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.862 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:19.121 00:12:19.121 Discovery Log Number of Records 6, Generation counter 6 00:12:19.121 =====Discovery Log Entry 0====== 00:12:19.121 trtype: tcp 00:12:19.121 adrfam: ipv4 00:12:19.121 subtype: current discovery subsystem 00:12:19.121 treq: not required 00:12:19.121 portid: 0 00:12:19.121 trsvcid: 4420 00:12:19.121 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:19.121 traddr: 10.0.0.2 00:12:19.121 eflags: explicit discovery connections, duplicate discovery information 00:12:19.121 sectype: none 00:12:19.121 =====Discovery Log Entry 1====== 00:12:19.121 trtype: tcp 00:12:19.121 adrfam: ipv4 00:12:19.121 subtype: nvme subsystem 00:12:19.121 treq: not required 00:12:19.121 portid: 0 00:12:19.121 trsvcid: 4420 00:12:19.121 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:19.121 traddr: 10.0.0.2 00:12:19.121 eflags: none 00:12:19.121 sectype: none 00:12:19.121 =====Discovery Log Entry 2====== 00:12:19.121 
trtype: tcp 00:12:19.121 adrfam: ipv4 00:12:19.121 subtype: nvme subsystem 00:12:19.121 treq: not required 00:12:19.121 portid: 0 00:12:19.121 trsvcid: 4420 00:12:19.121 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:19.121 traddr: 10.0.0.2 00:12:19.121 eflags: none 00:12:19.121 sectype: none 00:12:19.121 =====Discovery Log Entry 3====== 00:12:19.121 trtype: tcp 00:12:19.121 adrfam: ipv4 00:12:19.121 subtype: nvme subsystem 00:12:19.121 treq: not required 00:12:19.121 portid: 0 00:12:19.121 trsvcid: 4420 00:12:19.121 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:19.121 traddr: 10.0.0.2 00:12:19.121 eflags: none 00:12:19.121 sectype: none 00:12:19.121 =====Discovery Log Entry 4====== 00:12:19.121 trtype: tcp 00:12:19.121 adrfam: ipv4 00:12:19.122 subtype: nvme subsystem 00:12:19.122 treq: not required 00:12:19.122 portid: 0 00:12:19.122 trsvcid: 4420 00:12:19.122 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:19.122 traddr: 10.0.0.2 00:12:19.122 eflags: none 00:12:19.122 sectype: none 00:12:19.122 =====Discovery Log Entry 5====== 00:12:19.122 trtype: tcp 00:12:19.122 adrfam: ipv4 00:12:19.122 subtype: discovery subsystem referral 00:12:19.122 treq: not required 00:12:19.122 portid: 0 00:12:19.122 trsvcid: 4430 00:12:19.122 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:19.122 traddr: 10.0.0.2 00:12:19.122 eflags: none 00:12:19.122 sectype: none 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:19.122 Perform nvmf subsystem discovery via RPC 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.122 [ 00:12:19.122 { 00:12:19.122 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:19.122 "subtype": "Discovery", 00:12:19.122 "listen_addresses": [ 00:12:19.122 { 00:12:19.122 "trtype": "TCP", 00:12:19.122 "adrfam": "IPv4", 00:12:19.122 "traddr": "10.0.0.2", 00:12:19.122 "trsvcid": "4420" 00:12:19.122 } 00:12:19.122 ], 00:12:19.122 "allow_any_host": true, 00:12:19.122 "hosts": [] 00:12:19.122 }, 00:12:19.122 { 00:12:19.122 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:19.122 "subtype": "NVMe", 00:12:19.122 "listen_addresses": [ 00:12:19.122 { 00:12:19.122 "trtype": "TCP", 00:12:19.122 "adrfam": "IPv4", 00:12:19.122 "traddr": "10.0.0.2", 00:12:19.122 "trsvcid": "4420" 00:12:19.122 } 00:12:19.122 ], 00:12:19.122 "allow_any_host": true, 00:12:19.122 "hosts": [], 00:12:19.122 "serial_number": "SPDK00000000000001", 00:12:19.122 "model_number": "SPDK bdev Controller", 00:12:19.122 "max_namespaces": 32, 00:12:19.122 "min_cntlid": 1, 00:12:19.122 "max_cntlid": 65519, 00:12:19.122 "namespaces": [ 00:12:19.122 { 00:12:19.122 "nsid": 1, 00:12:19.122 "bdev_name": "Null1", 00:12:19.122 "name": "Null1", 00:12:19.122 "nguid": "B03FC5A1B53A4DFEA13593FF1A4504EE", 00:12:19.122 "uuid": "b03fc5a1-b53a-4dfe-a135-93ff1a4504ee" 00:12:19.122 } 00:12:19.122 ] 00:12:19.122 }, 00:12:19.122 { 00:12:19.122 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:19.122 "subtype": "NVMe", 00:12:19.122 "listen_addresses": [ 00:12:19.122 { 00:12:19.122 "trtype": "TCP", 00:12:19.122 "adrfam": "IPv4", 00:12:19.122 "traddr": "10.0.0.2", 00:12:19.122 "trsvcid": "4420" 00:12:19.122 } 00:12:19.122 ], 00:12:19.122 "allow_any_host": true, 00:12:19.122 "hosts": [], 00:12:19.122 "serial_number": "SPDK00000000000002", 00:12:19.122 "model_number": "SPDK bdev Controller", 00:12:19.122 "max_namespaces": 32, 00:12:19.122 "min_cntlid": 1, 00:12:19.122 "max_cntlid": 65519, 00:12:19.122 "namespaces": [ 00:12:19.122 { 00:12:19.122 "nsid": 1, 00:12:19.122 "bdev_name": "Null2", 00:12:19.122 "name": "Null2", 00:12:19.122 "nguid": "5831AB3ED8674BEB825DC6BA18283E2F", 
00:12:19.122 "uuid": "5831ab3e-d867-4beb-825d-c6ba18283e2f" 00:12:19.122 } 00:12:19.122 ] 00:12:19.122 }, 00:12:19.122 { 00:12:19.122 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:19.122 "subtype": "NVMe", 00:12:19.122 "listen_addresses": [ 00:12:19.122 { 00:12:19.122 "trtype": "TCP", 00:12:19.122 "adrfam": "IPv4", 00:12:19.122 "traddr": "10.0.0.2", 00:12:19.122 "trsvcid": "4420" 00:12:19.122 } 00:12:19.122 ], 00:12:19.122 "allow_any_host": true, 00:12:19.122 "hosts": [], 00:12:19.122 "serial_number": "SPDK00000000000003", 00:12:19.122 "model_number": "SPDK bdev Controller", 00:12:19.122 "max_namespaces": 32, 00:12:19.122 "min_cntlid": 1, 00:12:19.122 "max_cntlid": 65519, 00:12:19.122 "namespaces": [ 00:12:19.122 { 00:12:19.122 "nsid": 1, 00:12:19.122 "bdev_name": "Null3", 00:12:19.122 "name": "Null3", 00:12:19.122 "nguid": "F458C1612FC2420FAA24555362652413", 00:12:19.122 "uuid": "f458c161-2fc2-420f-aa24-555362652413" 00:12:19.122 } 00:12:19.122 ] 00:12:19.122 }, 00:12:19.122 { 00:12:19.122 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:19.122 "subtype": "NVMe", 00:12:19.122 "listen_addresses": [ 00:12:19.122 { 00:12:19.122 "trtype": "TCP", 00:12:19.122 "adrfam": "IPv4", 00:12:19.122 "traddr": "10.0.0.2", 00:12:19.122 "trsvcid": "4420" 00:12:19.122 } 00:12:19.122 ], 00:12:19.122 "allow_any_host": true, 00:12:19.122 "hosts": [], 00:12:19.122 "serial_number": "SPDK00000000000004", 00:12:19.122 "model_number": "SPDK bdev Controller", 00:12:19.122 "max_namespaces": 32, 00:12:19.122 "min_cntlid": 1, 00:12:19.122 "max_cntlid": 65519, 00:12:19.122 "namespaces": [ 00:12:19.122 { 00:12:19.122 "nsid": 1, 00:12:19.122 "bdev_name": "Null4", 00:12:19.122 "name": "Null4", 00:12:19.122 "nguid": "754C58D0CE20462BBA653AABE186BDD9", 00:12:19.122 "uuid": "754c58d0-ce20-462b-ba65-3aabe186bdd9" 00:12:19.122 } 00:12:19.122 ] 00:12:19.122 } 00:12:19.122 ] 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.122 
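The trace above shows target/discovery.sh building four null-bdev-backed subsystems with one loop (bdev_null_create, nvmf_create_subsystem, add_ns, add_listener per cnode) plus a discovery listener and a referral. As a hedged sketch only — the rpc.py path and /var/tmp/spdk.sock socket are inferred from the waitforlisten line in this log, and a live nvmf_tgt would be required to actually execute these — the RPC sequence can be reconstructed as a generator that prints each call rather than running it:

```shell
#!/usr/bin/env bash
# Reconstruction of the RPC sequence visible in this log. Commands are
# printed, not executed: running them needs a live nvmf_tgt listening on
# /var/tmp/spdk.sock (assumption taken from the waitforlisten output above).
gen_discovery_rpcs() {
    local rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    local addr=10.0.0.2   # traddr used throughout this log
    local i serial
    echo "$rpc nvmf_create_transport -t tcp -o -u 8192"
    for i in 1 2 3 4; do
        # serial numbers in the log are SPDK + 14 digits, e.g. SPDK00000000000001
        serial=$(printf 'SPDK%014d' "$i")
        echo "$rpc bdev_null_create Null$i 102400 512"
        echo "$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s $serial"
        echo "$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i"
        echo "$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a $addr -s 4420"
    done
    echo "$rpc nvmf_subsystem_add_listener discovery -t tcp -a $addr -s 4420"
    echo "$rpc nvmf_discovery_add_referral -t tcp -a $addr -s 4430"
}
gen_discovery_rpcs
```

This matches the six discovery log entries reported by `nvme discover` above: one current discovery subsystem on port 4420, four NVMe subsystems, and one referral on port 4430.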
05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:19.122 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:19.123 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.123 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.123 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.123 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:19.123 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:19.123 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:19.123 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:19.123 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:19.123 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:19.123 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:19.123 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:19.123 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:19.123 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:19.123 rmmod nvme_tcp 00:12:19.383 rmmod nvme_fabrics 00:12:19.383 rmmod nvme_keyring 00:12:19.383 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:19.383 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:19.383 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:19.383 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 394871 ']' 00:12:19.383 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 394871 00:12:19.383 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 394871 ']' 00:12:19.383 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 394871 00:12:19.383 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:19.383 
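The teardown trace that follows (autotest_common.sh@958–@973) kills the nvmf_tgt pid only after checking `uname` and the process comm name, refusing to signal a pid whose comm is `sudo`. A simplified reconstruction of that guard, under the assumption that only the Linux branch shown in this log matters, looks like:

```shell
# Simplified sketch of the killprocess guard seen in this log: before `kill`,
# verify the pid still exists and that its comm is not `sudo`, so a recycled
# pid now owned by sudo is never signalled. Function name mirrors
# autotest_common.sh; the body is a reconstruction, not the upstream code.
killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" 2>/dev/null || return 1          # pid already gone
    # the log checks '[' Linux = Linux ']'; only that branch is sketched here
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1          # never kill sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
}
```

In the trace above this resolves to `process_name=reactor_0` for pid 394871, so the guard passes and the target is terminated normally.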
05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:19.383 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394871 00:12:19.383 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:19.383 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:19.383 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394871' 00:12:19.383 killing process with pid 394871 00:12:19.383 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 394871 00:12:19.383 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 394871 00:12:19.642 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:19.642 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:19.642 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:19.642 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:19.642 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:19.642 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:19.642 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:19.642 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:19.642 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:19.642 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.642 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.642 05:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.551 05:07:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:21.551 00:12:21.551 real 0m11.528s 00:12:21.551 user 0m8.827s 00:12:21.551 sys 0m6.093s 00:12:21.551 05:07:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.551 05:07:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.551 ************************************ 00:12:21.551 END TEST nvmf_target_discovery 00:12:21.551 ************************************ 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:21.811 ************************************ 00:12:21.811 START TEST nvmf_referrals 00:12:21.811 ************************************ 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:21.811 * Looking for test storage... 
00:12:21.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:21.811 05:07:04 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:21.811 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:21.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.811 
--rc genhtml_branch_coverage=1 00:12:21.811 --rc genhtml_function_coverage=1 00:12:21.811 --rc genhtml_legend=1 00:12:21.811 --rc geninfo_all_blocks=1 00:12:21.811 --rc geninfo_unexecuted_blocks=1 00:12:21.812 00:12:21.812 ' 00:12:21.812 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:21.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.812 --rc genhtml_branch_coverage=1 00:12:21.812 --rc genhtml_function_coverage=1 00:12:21.812 --rc genhtml_legend=1 00:12:21.812 --rc geninfo_all_blocks=1 00:12:21.812 --rc geninfo_unexecuted_blocks=1 00:12:21.812 00:12:21.812 ' 00:12:21.812 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:21.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.812 --rc genhtml_branch_coverage=1 00:12:21.812 --rc genhtml_function_coverage=1 00:12:21.812 --rc genhtml_legend=1 00:12:21.812 --rc geninfo_all_blocks=1 00:12:21.812 --rc geninfo_unexecuted_blocks=1 00:12:21.812 00:12:21.812 ' 00:12:21.812 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:21.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.812 --rc genhtml_branch_coverage=1 00:12:21.812 --rc genhtml_function_coverage=1 00:12:21.812 --rc genhtml_legend=1 00:12:21.812 --rc geninfo_all_blocks=1 00:12:21.812 --rc geninfo_unexecuted_blocks=1 00:12:21.812 00:12:21.812 ' 00:12:21.812 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.812 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.072 
05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:22.072 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.073 05:07:04 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:22.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:22.073 05:07:04 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:22.073 05:07:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:30.212 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.212 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:30.213 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:30.213 Found net devices under 0000:af:00.0: cvl_0_0 00:12:30.213 05:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:30.213 Found net devices under 0000:af:00.1: cvl_0_1 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:30.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:30.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:12:30.213 00:12:30.213 --- 10.0.0.2 ping statistics --- 00:12:30.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.213 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:30.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:30.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:12:30.213 00:12:30.213 --- 10.0.0.1 ping statistics --- 00:12:30.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.213 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=399017 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 399017 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 399017 ']' 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.213 05:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.213 [2024-12-09 05:07:11.698129] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:12:30.213 [2024-12-09 05:07:11.698176] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.213 [2024-12-09 05:07:11.794505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:30.213 [2024-12-09 05:07:11.835778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.213 [2024-12-09 05:07:11.835815] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:30.213 [2024-12-09 05:07:11.835830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.213 [2024-12-09 05:07:11.835841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.213 [2024-12-09 05:07:11.835851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.213 [2024-12-09 05:07:11.837511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.213 [2024-12-09 05:07:11.837620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.213 [2024-12-09 05:07:11.837728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.214 [2024-12-09 05:07:11.837729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.214 [2024-12-09 05:07:12.581349] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.214 [2024-12-09 05:07:12.604328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:30.214 05:07:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:30.214 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:30.473 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:30.473 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:30.473 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.473 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:30.473 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.473 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.473 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:30.473 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:30.473 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:30.473 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.474 05:07:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.474 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.733 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:30.733 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:30.733 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:30.733 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:30.733 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:30.733 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:30.733 05:07:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:30.733 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:30.733 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:30.733 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:30.733 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.733 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.733 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.733 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:30.733 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.733 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.733 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.733 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:30.733 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:30.733 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:30.733 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:30.733 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.733 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:30.733 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.733 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.993 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:30.993 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:30.993 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:30.993 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:30.993 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:30.993 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:30.993 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:30.993 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:30.993 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:30.993 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:30.993 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:30.993 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:30.993 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:30.993 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:30.993 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:31.252 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:31.252 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:31.252 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:31.252 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:31.252 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:31.252 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:31.512 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:31.512 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:31.512 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.512 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.512 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.512 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:31.512 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:31.512 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:31.512 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:31.512 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:31.512 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.512 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.512 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.512 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:31.512 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:31.512 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:31.512 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:31.512 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:31.512 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:31.512 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:31.512 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:31.772 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:31.772 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:31.772 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:31.772 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:31.772 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:31.772 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:31.772 05:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:31.772 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:31.772 05:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:31.772 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:31.772 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:31.772 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:31.772 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:32.031 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:32.031 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:32.031 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.031 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.031 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.031 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:32.031 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:32.031 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.031 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:32.031 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.031 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:32.031 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:32.031 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:32.031 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:32.031 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:32.031 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:32.031 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:32.290 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:32.291 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:32.291 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:32.291 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:32.291 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:32.291 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:32.291 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:32.291 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:32.291 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:32.291 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:32.291 rmmod nvme_tcp 00:12:32.291 rmmod nvme_fabrics 00:12:32.291 rmmod nvme_keyring 00:12:32.291 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:32.291 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:32.291 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:32.291 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 399017 ']' 00:12:32.291 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 399017 00:12:32.291 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 399017 ']' 00:12:32.291 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 399017 00:12:32.291 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:32.291 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:32.291 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 399017 00:12:32.550 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:32.550 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:32.550 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 399017' 00:12:32.550 killing process with pid 399017 00:12:32.550 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 399017 00:12:32.550 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 399017 00:12:32.550 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:32.550 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:32.550 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:32.550 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:32.550 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:32.550 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:32.550 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:32.550 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:32.550 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:32.550 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.550 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.550 05:07:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.098 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:35.098 00:12:35.098 real 0m12.969s 00:12:35.098 user 0m15.334s 00:12:35.098 sys 0m6.564s 00:12:35.098 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.098 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.098 ************************************ 
00:12:35.098 END TEST nvmf_referrals 00:12:35.098 ************************************ 00:12:35.098 05:07:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:35.098 05:07:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:35.098 05:07:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.098 05:07:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:35.098 ************************************ 00:12:35.098 START TEST nvmf_connect_disconnect 00:12:35.098 ************************************ 00:12:35.098 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:35.098 * Looking for test storage... 
00:12:35.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.098 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:35.098 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:35.098 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:35.098 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:35.098 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:35.098 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:35.098 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:35.098 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.098 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:35.098 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:35.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.099 --rc genhtml_branch_coverage=1 00:12:35.099 --rc genhtml_function_coverage=1 00:12:35.099 --rc genhtml_legend=1 00:12:35.099 --rc geninfo_all_blocks=1 00:12:35.099 --rc geninfo_unexecuted_blocks=1 00:12:35.099 00:12:35.099 ' 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:35.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.099 --rc genhtml_branch_coverage=1 00:12:35.099 --rc genhtml_function_coverage=1 00:12:35.099 --rc genhtml_legend=1 00:12:35.099 --rc geninfo_all_blocks=1 00:12:35.099 --rc geninfo_unexecuted_blocks=1 00:12:35.099 00:12:35.099 ' 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:35.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.099 --rc genhtml_branch_coverage=1 00:12:35.099 --rc genhtml_function_coverage=1 00:12:35.099 --rc genhtml_legend=1 00:12:35.099 --rc geninfo_all_blocks=1 00:12:35.099 --rc geninfo_unexecuted_blocks=1 00:12:35.099 00:12:35.099 ' 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:35.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.099 --rc genhtml_branch_coverage=1 00:12:35.099 --rc genhtml_function_coverage=1 00:12:35.099 --rc genhtml_legend=1 00:12:35.099 --rc geninfo_all_blocks=1 00:12:35.099 --rc geninfo_unexecuted_blocks=1 00:12:35.099 00:12:35.099 ' 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:35.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:35.099 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.100 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:35.100 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:35.100 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:35.100 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.100 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.100 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.100 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:35.100 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:35.100 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:35.100 05:07:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:43.237 05:07:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:43.237 05:07:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:43.237 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:43.237 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:43.237 05:07:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.237 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:43.238 Found net devices under 0000:af:00.0: cvl_0_0 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:43.238 05:07:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:43.238 Found net devices under 0000:af:00.1: cvl_0_1 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:43.238 05:07:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:43.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:43.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:12:43.238 00:12:43.238 --- 10.0.0.2 ping statistics --- 00:12:43.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.238 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:43.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:43.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:12:43.238 00:12:43.238 --- 10.0.0.1 ping statistics --- 00:12:43.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.238 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
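The trace above shows nvmf_tcp_init pairing the two detected ports by moving one into a private network namespace, assigning 10.0.0.1/10.0.0.2, and pinging in both directions. A minimal dry-run sketch of that sequence (interface names and addresses copied from the log; each step is printed rather than executed, since the real commands need root):

```shell
# Dry-run sketch of the namespace topology the harness builds above.
# cvl_0_0 / cvl_0_1 and the 10.0.0.x addresses are taken from the log.
ns=cvl_0_0_ns_spdk
tgt_if=cvl_0_0   # target-side port, moved into the namespace
ini_if=cvl_0_1   # initiator-side port, left in the default namespace

dry() { printf '%s\n' "$*"; }   # swap the body for "$@" to actually execute

dry ip netns add "$ns"
dry ip link set "$tgt_if" netns "$ns"
dry ip addr add 10.0.0.1/24 dev "$ini_if"
dry ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
dry ip link set "$ini_if" up
dry ip netns exec "$ns" ip link set "$tgt_if" up
dry ping -c 1 10.0.0.2                        # initiator -> target
dry ip netns exec "$ns" ping -c 1 10.0.0.1    # target -> initiator
```

This gives the target app a dedicated interface (run later via `ip netns exec cvl_0_0_ns_spdk`) while the initiator stays in the default namespace, so a single machine exercises a real TCP path between two physical ports.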
nvmfpid=403360 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 403360 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 403360 ']' 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:43.238 05:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:43.238 [2024-12-09 05:07:24.742392] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:12:43.238 [2024-12-09 05:07:24.742441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.238 [2024-12-09 05:07:24.843266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:43.238 [2024-12-09 05:07:24.885730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:43.238 [2024-12-09 05:07:24.885768] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:43.238 [2024-12-09 05:07:24.885784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:43.238 [2024-12-09 05:07:24.885795] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:43.238 [2024-12-09 05:07:24.885805] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:43.238 [2024-12-09 05:07:24.887477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.238 [2024-12-09 05:07:24.887510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:43.238 [2024-12-09 05:07:24.887720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:43.238 [2024-12-09 05:07:24.887724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.238 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:43.238 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:43.238 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:43.238 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:43.238 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:43.238 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.238 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:43.238 05:07:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.239 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:43.239 [2024-12-09 05:07:25.640554] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.239 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.239 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:43.239 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.239 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:43.239 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.239 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:43.239 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:43.239 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.239 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:43.239 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.239 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:43.239 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.239 05:07:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:43.239 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.239 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.239 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.239 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:43.239 [2024-12-09 05:07:25.702745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.497 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.497 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:43.497 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:43.497 05:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:46.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.845 05:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:00.845 05:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:00.845 05:07:42 
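The five "disconnected 1 controller(s)" lines above come from connect_disconnect.sh iterating num_iterations=5 over the listener created on 10.0.0.2:4420. A hedged sketch of that loop shape (the nvme-cli calls are left as comments and replaced by a counter, so the loop itself runs without hardware or root):

```shell
# Sketch of the connect/disconnect loop recorded above (num_iterations=5).
# Subsystem NQN, address, and port are taken from the log; the actual
# nvme-cli invocations are commented out.
subnqn=nqn.2016-06.io.spdk:cnode1
addr=10.0.0.2
port=4420
num_iterations=5
connected=0
for ((i = 1; i <= num_iterations; i++)); do
    # nvme connect -t tcp -n "$subnqn" -a "$addr" -s "$port"
    connected=$((connected + 1))
    # nvme disconnect -n "$subnqn"
    echo "NQN:$subnqn disconnected 1 controller(s)"
done
```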
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:00.845 05:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:00.845 05:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:00.845 05:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:00.845 05:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:00.845 05:07:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:00.845 rmmod nvme_tcp 00:13:00.845 rmmod nvme_fabrics 00:13:00.845 rmmod nvme_keyring 00:13:00.845 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:00.846 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:00.846 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:00.846 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 403360 ']' 00:13:00.846 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 403360 00:13:00.846 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 403360 ']' 00:13:00.846 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 403360 00:13:00.846 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:13:00.846 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.846 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 403360 00:13:00.846 
05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:00.846 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:00.846 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 403360' 00:13:00.846 killing process with pid 403360 00:13:00.846 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 403360 00:13:00.846 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 403360 00:13:00.846 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:00.846 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:00.846 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:00.846 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:00.846 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:00.846 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:00.846 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:01.105 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:01.105 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:01.105 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.105 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
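The teardown above removes every firewall rule the harness added in one pass: each rule was inserted with an `SPDK_NVMF` comment, so `iptables-save | grep -v SPDK_NVMF | iptables-restore` drops exactly the tagged rules. A sketch of that filtering step, demonstrated on a literal ruleset string rather than live iptables state (which would need root):

```shell
# Tag-based cleanup as seen in the log: rules carrying an SPDK_NVMF comment
# are stripped from the saved ruleset; everything else survives.
ruleset='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF:accept-4420
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT'
# real harness: iptables-save | grep -v SPDK_NVMF | iptables-restore
cleaned="$(printf '%s\n' "$ruleset" | grep -v SPDK_NVMF)"
printf '%s\n' "$cleaned"
```

Tagging rules at insert time and filtering the saved ruleset at cleanup avoids having to remember and replay each `-D` deletion, and leaves unrelated rules untouched.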
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.105 05:07:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.014 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:03.014 00:13:03.014 real 0m28.257s 00:13:03.014 user 1m14.845s 00:13:03.014 sys 0m7.552s 00:13:03.014 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.014 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.014 ************************************ 00:13:03.014 END TEST nvmf_connect_disconnect 00:13:03.014 ************************************ 00:13:03.014 05:07:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:03.014 05:07:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:03.014 05:07:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.014 05:07:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:03.272 ************************************ 00:13:03.272 START TEST nvmf_multitarget 00:13:03.272 ************************************ 00:13:03.272 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:03.272 * Looking for test storage... 
00:13:03.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:03.272 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:03.272 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:13:03.272 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:03.272 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:03.273 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.273 --rc genhtml_branch_coverage=1 00:13:03.273 --rc genhtml_function_coverage=1 00:13:03.273 --rc genhtml_legend=1 00:13:03.273 --rc geninfo_all_blocks=1 00:13:03.273 --rc geninfo_unexecuted_blocks=1 00:13:03.273 00:13:03.273 ' 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:03.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.273 --rc genhtml_branch_coverage=1 00:13:03.273 --rc genhtml_function_coverage=1 00:13:03.273 --rc genhtml_legend=1 00:13:03.273 --rc geninfo_all_blocks=1 00:13:03.273 --rc geninfo_unexecuted_blocks=1 00:13:03.273 00:13:03.273 ' 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:03.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.273 --rc genhtml_branch_coverage=1 00:13:03.273 --rc genhtml_function_coverage=1 00:13:03.273 --rc genhtml_legend=1 00:13:03.273 --rc geninfo_all_blocks=1 00:13:03.273 --rc geninfo_unexecuted_blocks=1 00:13:03.273 00:13:03.273 ' 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:03.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.273 --rc genhtml_branch_coverage=1 00:13:03.273 --rc genhtml_function_coverage=1 00:13:03.273 --rc genhtml_legend=1 00:13:03.273 --rc geninfo_all_blocks=1 00:13:03.273 --rc geninfo_unexecuted_blocks=1 00:13:03.273 00:13:03.273 ' 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.273 05:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:03.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.273 05:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:03.273 05:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:11.397 05:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:11.397 05:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:11.397 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:11.397 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.397 05:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:11.397 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:11.398 Found net devices under 0000:af:00.0: cvl_0_0 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.398 
05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:11.398 Found net devices under 0000:af:00.1: cvl_0_1 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:11.398 05:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:11.398 05:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:11.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:11.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:13:11.398 00:13:11.398 --- 10.0.0.2 ping statistics --- 00:13:11.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.398 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:11.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:11.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:13:11.398 00:13:11.398 --- 10.0.0.1 ping statistics --- 00:13:11.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.398 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=410359 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 410359 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 410359 ']' 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.398 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:11.398 [2024-12-09 05:07:53.148132] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:13:11.398 [2024-12-09 05:07:53.148180] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.398 [2024-12-09 05:07:53.245559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:11.398 [2024-12-09 05:07:53.287918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.398 [2024-12-09 05:07:53.287957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:11.398 [2024-12-09 05:07:53.287971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.398 [2024-12-09 05:07:53.287982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.398 [2024-12-09 05:07:53.287992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:11.398 [2024-12-09 05:07:53.290024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.398 [2024-12-09 05:07:53.290053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.398 [2024-12-09 05:07:53.290161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.398 [2024-12-09 05:07:53.290162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.658 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.658 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:13:11.658 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:11.658 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:11.658 05:07:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:11.658 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.658 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:11.658 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:11.658 05:07:54 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:11.917 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:11.917 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:11.917 "nvmf_tgt_1" 00:13:11.917 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:11.917 "nvmf_tgt_2" 00:13:11.917 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:11.917 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:12.177 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:12.177 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:12.177 true 00:13:12.177 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:12.436 true 00:13:12.436 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:12.436 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:12.436 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:12.436 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:12.436 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:12.436 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:12.436 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:12.436 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:12.436 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:12.436 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:12.436 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:12.436 rmmod nvme_tcp 00:13:12.436 rmmod nvme_fabrics 00:13:12.436 rmmod nvme_keyring 00:13:12.436 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:12.436 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:12.436 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:12.436 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 410359 ']' 00:13:12.436 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 410359 00:13:12.436 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 410359 ']' 00:13:12.436 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 410359 00:13:12.436 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:13:12.436 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.436 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 410359 00:13:12.697 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.697 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.697 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 410359' 00:13:12.697 killing process with pid 410359 00:13:12.697 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 410359 00:13:12.697 05:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 410359 00:13:12.697 05:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:12.697 05:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:12.697 05:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:12.697 05:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:12.697 05:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:12.697 05:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:12.697 05:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:12.697 05:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:12.697 05:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:12.697 05:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.697 05:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:12.697 05:07:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.230 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:15.230 00:13:15.231 real 0m11.724s 00:13:15.231 user 0m10.083s 00:13:15.231 sys 0m6.267s 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:15.231 ************************************ 00:13:15.231 END TEST nvmf_multitarget 00:13:15.231 ************************************ 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:15.231 ************************************ 00:13:15.231 START TEST nvmf_rpc 00:13:15.231 ************************************ 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:15.231 * Looking for test storage... 
00:13:15.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.231 05:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:15.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.231 --rc genhtml_branch_coverage=1 00:13:15.231 --rc genhtml_function_coverage=1 00:13:15.231 --rc genhtml_legend=1 00:13:15.231 --rc geninfo_all_blocks=1 00:13:15.231 --rc geninfo_unexecuted_blocks=1 
00:13:15.231 00:13:15.231 ' 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:15.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.231 --rc genhtml_branch_coverage=1 00:13:15.231 --rc genhtml_function_coverage=1 00:13:15.231 --rc genhtml_legend=1 00:13:15.231 --rc geninfo_all_blocks=1 00:13:15.231 --rc geninfo_unexecuted_blocks=1 00:13:15.231 00:13:15.231 ' 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:15.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.231 --rc genhtml_branch_coverage=1 00:13:15.231 --rc genhtml_function_coverage=1 00:13:15.231 --rc genhtml_legend=1 00:13:15.231 --rc geninfo_all_blocks=1 00:13:15.231 --rc geninfo_unexecuted_blocks=1 00:13:15.231 00:13:15.231 ' 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:15.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.231 --rc genhtml_branch_coverage=1 00:13:15.231 --rc genhtml_function_coverage=1 00:13:15.231 --rc genhtml_legend=1 00:13:15.231 --rc geninfo_all_blocks=1 00:13:15.231 --rc geninfo_unexecuted_blocks=1 00:13:15.231 00:13:15.231 ' 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.231 05:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.231 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:15.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:15.232 05:07:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:15.232 05:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:23.355 
05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 
(0x8086 - 0x159b)' 00:13:23.355 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:23.355 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]]
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:13:23.355 Found net devices under 0000:af:00.0: cvl_0_0
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]]
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:13:23.355 Found net devices under 0000:af:00.1: cvl_0_1
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:23.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:23.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms
00:13:23.355
00:13:23.355 --- 10.0.0.2 ping statistics ---
00:13:23.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:23.355 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:23.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:23.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms
00:13:23.355
00:13:23.355 --- 10.0.0.1 ping statistics ---
00:13:23.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:23.355 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=414373
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 414373
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 414373 ']'
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:23.355 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:23.356 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:23.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:23.356 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:23.356 05:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:23.356 [2024-12-09 05:08:04.960973] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
00:13:23.356 [2024-12-09 05:08:04.961025] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:23.356 [2024-12-09 05:08:05.056204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:23.356 [2024-12-09 05:08:05.096426] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:23.356 [2024-12-09 05:08:05.096463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:23.356 [2024-12-09 05:08:05.096478] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:23.356 [2024-12-09 05:08:05.096489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:23.356 [2024-12-09 05:08:05.096498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:23.356 [2024-12-09 05:08:05.098415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:23.356 [2024-12-09 05:08:05.098525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:23.356 [2024-12-09 05:08:05.098552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:23.356 [2024-12-09 05:08:05.098553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:23.356 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:23.356 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0
00:13:23.356 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:13:23.356 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:23.356 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:23.614 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:23.614 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats
00:13:23.614 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:23.614 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:23.614 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:23.614 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{
00:13:23.614 "tick_rate": 2500000000,
00:13:23.614 "poll_groups": [
00:13:23.614 {
00:13:23.614 "name": "nvmf_tgt_poll_group_000",
00:13:23.614 "admin_qpairs": 0,
00:13:23.614 "io_qpairs": 0,
00:13:23.614 "current_admin_qpairs": 0,
00:13:23.614 "current_io_qpairs": 0,
00:13:23.614 "pending_bdev_io": 0,
00:13:23.614 "completed_nvme_io": 0,
00:13:23.614 "transports": []
00:13:23.614 },
00:13:23.614 {
00:13:23.614 "name": "nvmf_tgt_poll_group_001",
00:13:23.614 "admin_qpairs": 0,
00:13:23.614 "io_qpairs": 0,
00:13:23.614 "current_admin_qpairs": 0,
00:13:23.614 "current_io_qpairs": 0,
00:13:23.614 "pending_bdev_io": 0,
00:13:23.614 "completed_nvme_io": 0,
00:13:23.614 "transports": []
00:13:23.614 },
00:13:23.614 {
00:13:23.614 "name": "nvmf_tgt_poll_group_002",
00:13:23.614 "admin_qpairs": 0,
00:13:23.614 "io_qpairs": 0,
00:13:23.614 "current_admin_qpairs": 0,
00:13:23.614 "current_io_qpairs": 0,
00:13:23.614 "pending_bdev_io": 0,
00:13:23.614 "completed_nvme_io": 0,
00:13:23.614 "transports": []
00:13:23.614 },
00:13:23.614 {
00:13:23.614 "name": "nvmf_tgt_poll_group_003",
00:13:23.614 "admin_qpairs": 0,
00:13:23.614 "io_qpairs": 0,
00:13:23.614 "current_admin_qpairs": 0,
00:13:23.614 "current_io_qpairs": 0,
00:13:23.614 "pending_bdev_io": 0,
00:13:23.614 "completed_nvme_io": 0,
00:13:23.614 "transports": []
00:13:23.614 }
00:13:23.614 ]
00:13:23.614 }'
00:13:23.614 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name'
00:13:23.614 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name'
00:13:23.614 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l
00:13:23.614 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name'
00:13:23.614 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 ))
00:13:23.615 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]'
00:13:23.615 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]]
00:13:23.615 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:23.615 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:23.615 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:23.615 [2024-12-09 05:08:05.960204] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:23.615 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:23.615 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats
00:13:23.615 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:23.615 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:23.615 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:23.615 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{
00:13:23.615 "tick_rate": 2500000000,
00:13:23.615 "poll_groups": [
00:13:23.615 {
00:13:23.615 "name": "nvmf_tgt_poll_group_000",
00:13:23.615 "admin_qpairs": 0,
00:13:23.615 "io_qpairs": 0,
00:13:23.615 "current_admin_qpairs": 0,
00:13:23.615 "current_io_qpairs": 0,
00:13:23.615 "pending_bdev_io": 0,
00:13:23.615 "completed_nvme_io": 0,
00:13:23.615 "transports": [
00:13:23.615 {
00:13:23.615 "trtype": "TCP"
00:13:23.615 }
00:13:23.615 ]
00:13:23.615 },
00:13:23.615 {
00:13:23.615 "name": "nvmf_tgt_poll_group_001",
00:13:23.615 "admin_qpairs": 0,
00:13:23.615 "io_qpairs": 0,
00:13:23.615 "current_admin_qpairs": 0,
00:13:23.615 "current_io_qpairs": 0,
00:13:23.615 "pending_bdev_io": 0,
00:13:23.615 "completed_nvme_io": 0,
00:13:23.615 "transports": [
00:13:23.615 {
00:13:23.615 "trtype": "TCP"
00:13:23.615 }
00:13:23.615 ]
00:13:23.615 },
00:13:23.615 {
00:13:23.615 "name": "nvmf_tgt_poll_group_002",
00:13:23.615 "admin_qpairs": 0,
00:13:23.615 "io_qpairs": 0,
00:13:23.615 "current_admin_qpairs": 0,
00:13:23.615 "current_io_qpairs": 0,
00:13:23.615 "pending_bdev_io": 0,
00:13:23.615 "completed_nvme_io": 0,
00:13:23.615 "transports": [
00:13:23.615 {
00:13:23.615 "trtype": "TCP"
00:13:23.615 }
00:13:23.615 ]
00:13:23.615 },
00:13:23.615 {
00:13:23.615 "name": "nvmf_tgt_poll_group_003",
00:13:23.615 "admin_qpairs": 0,
00:13:23.615 "io_qpairs": 0,
00:13:23.615 "current_admin_qpairs": 0,
00:13:23.615 "current_io_qpairs": 0,
00:13:23.615 "pending_bdev_io": 0,
00:13:23.615 "completed_nvme_io": 0,
00:13:23.615 "transports": [
00:13:23.615 {
00:13:23.615 "trtype": "TCP"
00:13:23.615 }
00:13:23.615 ]
00:13:23.615 }
00:13:23.615 ]
00:13:23.615 }'
00:13:23.615 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:13:23.615 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:13:23.615 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:13:23.615 05:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:13:23.615 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:13:23.615 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:13:23.615 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:13:23.615 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:13:23.615 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:13:23.873 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
00:13:23.873 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:13:23.873 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:13:23.873 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:13:23.873 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:13:23.873 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:23.873 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:23.873 Malloc1
00:13:23.873 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:23.873 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:13:23.873 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:23.873 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:23.873 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:23.873 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:13:23.873 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:23.874 [2024-12-09 05:08:06.153518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -a 10.0.0.2 -s 4420
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -a 10.0.0.2 -s 4420
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -a 10.0.0.2 -s 4420
00:13:23.874 [2024-12-09 05:08:06.182066] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562'
00:13:23.874 Failed to write to /dev/nvme-fabrics: Input/output error
00:13:23.874 could not add new controller: failed to write to nvme-fabrics device
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:23.874 05:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:25.249 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:13:25.249 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:13:25.249 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:13:25.249 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:13:25.249 05:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:13:27.151 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:13:27.151 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:13:27.151 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:13:27.151 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:13:27.151 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:13:27.151 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:13:27.151 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:27.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:27.410 [2024-12-09 05:08:09.810759] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562'
00:13:27.410 Failed to write to /dev/nvme-fabrics: Input/output error
00:13:27.410 could not add new controller: failed to write to nvme-fabrics device
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:27.410 05:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:28.790 05:08:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:13:28.790 05:08:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:13:28.790 05:08:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:13:28.790 05:08:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:13:28.790 05:08:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:13:30.689 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:13:30.689 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:13:30.689 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:13:30.689 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:13:30.689 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:13:30.689 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:13:30.689 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:30.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:30.946 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:13:30.946 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:13:30.946 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:13:30.946 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:30.946 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:13:30.946 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:30.946 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:13:30.946 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:30.946 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:30.946 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:30.946 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:30.946 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:13:30.946 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:13:30.946 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:13:30.946 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:30.946 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:30.946 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:30.946 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:30.946 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:30.946 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:30.947 [2024-12-09 05:08:13.330011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:30.947 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:30.947 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:13:30.947 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:30.947 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:30.947 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:30.947 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:13:30.947 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:30.947 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:30.947 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:30.947 05:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:32.322 05:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:13:32.322 05:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:13:32.322 05:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:13:32.322 05:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:13:32.322 05:08:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:13:34.221 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:13:34.221 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:13:34.221 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:13:34.221 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:13:34.221 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:13:34.221 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:13:34.221 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:34.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:34.480 [2024-12-09 05:08:16.815565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:34.480 05:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:35.856 05:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:35.856 05:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:35.856 05:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:35.856 05:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:35.856 05:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:37.768 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:37.768 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:37.768 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:37.768 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:37.768 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:37.768 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:37.768 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:37.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.768 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:37.768 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:37.768 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:37.768 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.768 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:37.768 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.768 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:37.768 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:37.768 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.768 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.768 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.768 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:37.768 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.768 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.027 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.027 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:38.027 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:38.027 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.027 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.027 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.027 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
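The `waitforserial` / `waitforserial_disconnect` traces repeated in this log (the `local i=0`, `(( i++ <= 15 ))`, `lsblk -l -o NAME,SERIAL`, `grep -c`, `sleep 2` sequence from common/autotest_common.sh) all follow one polling pattern: count block devices whose serial matches, and retry up to 15 times. A minimal sketch of that pattern is below; `probe_cmd` is a hypothetical stand-in for the real `lsblk -l -o NAME,SERIAL | grep -c <serial>` pipeline so the helper can be exercised without NVMe hardware.

```shell
# Hedged sketch of the waitforserial polling loop seen in the log.
# Assumption: probe_cmd <serial> prints the current count of block
# devices with that serial (the real script pipes lsblk into grep -c).
waitforserial() {
    local serial=$1
    local nvme_device_counter=${2:-1}   # how many devices we expect
    local i=0 nvme_devices=0
    while (( i++ <= 15 )); do           # same retry bound as the log
        nvme_devices=$(probe_cmd "$serial")
        # success once the observed count reaches the expected count
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2                         # same backoff as the log
    done
    return 1                            # timed out waiting for device
}
```

In the log the success path is visible as `nvme_devices=1` followed by `return 0` on the first check after the initial `sleep 2`.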
00:13:38.027 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.027 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.027 [2024-12-09 05:08:20.259038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.027 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.027 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:38.027 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.027 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.027 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.027 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:38.027 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.027 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.027 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.027 05:08:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:39.403 05:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:39.403 05:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:39.403 05:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:13:39.403 05:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:39.403 05:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:41.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.301 [2024-12-09 05:08:23.747677] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.301 05:08:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.716 05:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:42.716 05:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:42.716 05:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.716 05:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:42.716 05:08:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:45.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.254 [2024-12-09 05:08:27.422471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.254 05:08:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.254 05:08:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:46.645 05:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:46.645 05:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:46.645 05:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:46.645 05:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:46.645 05:08:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:48.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.548 [2024-12-09 05:08:30.977498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.548 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.549 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.549 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.549 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.549 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.549 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.549 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.549 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.549 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.549 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.549 05:08:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.549 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.549 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.549 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.549 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.549 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.549 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:48.549 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.549 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.549 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.809 [2024-12-09 05:08:31.025605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.809 
05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.809 [2024-12-09 05:08:31.073736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:48.809 
05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.809 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.810 [2024-12-09 05:08:31.121899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.810 [2024-12-09 
05:08:31.170053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.810 
05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:48.810 "tick_rate": 2500000000, 00:13:48.810 "poll_groups": [ 00:13:48.810 { 00:13:48.810 "name": "nvmf_tgt_poll_group_000", 00:13:48.810 "admin_qpairs": 2, 00:13:48.810 "io_qpairs": 196, 00:13:48.810 "current_admin_qpairs": 0, 00:13:48.810 "current_io_qpairs": 0, 00:13:48.810 "pending_bdev_io": 0, 00:13:48.810 "completed_nvme_io": 301, 00:13:48.810 "transports": [ 00:13:48.810 { 00:13:48.810 "trtype": "TCP" 00:13:48.810 } 00:13:48.810 ] 00:13:48.810 }, 00:13:48.810 { 00:13:48.810 "name": "nvmf_tgt_poll_group_001", 00:13:48.810 "admin_qpairs": 2, 00:13:48.810 "io_qpairs": 196, 00:13:48.810 "current_admin_qpairs": 0, 00:13:48.810 "current_io_qpairs": 0, 00:13:48.810 "pending_bdev_io": 0, 00:13:48.810 "completed_nvme_io": 296, 00:13:48.810 "transports": [ 00:13:48.810 { 00:13:48.810 "trtype": "TCP" 00:13:48.810 } 00:13:48.810 ] 00:13:48.810 }, 00:13:48.810 { 00:13:48.810 "name": "nvmf_tgt_poll_group_002", 00:13:48.810 "admin_qpairs": 1, 00:13:48.810 "io_qpairs": 196, 00:13:48.810 "current_admin_qpairs": 0, 00:13:48.810 "current_io_qpairs": 0, 00:13:48.810 "pending_bdev_io": 0, 00:13:48.810 "completed_nvme_io": 248, 00:13:48.810 "transports": [ 00:13:48.810 { 00:13:48.810 "trtype": "TCP" 00:13:48.810 } 00:13:48.810 ] 00:13:48.810 }, 00:13:48.810 { 00:13:48.810 "name": "nvmf_tgt_poll_group_003", 00:13:48.810 "admin_qpairs": 2, 00:13:48.810 "io_qpairs": 196, 
00:13:48.810 "current_admin_qpairs": 0, 00:13:48.810 "current_io_qpairs": 0, 00:13:48.810 "pending_bdev_io": 0, 00:13:48.810 "completed_nvme_io": 289, 00:13:48.810 "transports": [ 00:13:48.810 { 00:13:48.810 "trtype": "TCP" 00:13:48.810 } 00:13:48.810 ] 00:13:48.810 } 00:13:48.810 ] 00:13:48.810 }' 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:48.810 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:49.070 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:49.070 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:49.070 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:49.070 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:49.070 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:49.070 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:13:49.070 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:49.070 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:49.070 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:49.070 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:49.070 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:49.070 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:49.070 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:49.070 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:49.070 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:49.070 rmmod nvme_tcp 00:13:49.071 rmmod nvme_fabrics 00:13:49.071 rmmod nvme_keyring 00:13:49.071 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:49.071 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:49.071 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:49.071 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 414373 ']' 00:13:49.071 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 414373 00:13:49.071 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 414373 ']' 00:13:49.071 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 414373 00:13:49.071 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:49.071 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.071 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 414373 00:13:49.071 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.071 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.071 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 414373' 00:13:49.071 killing process with pid 414373 00:13:49.071 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@973 -- # kill 414373 00:13:49.071 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 414373 00:13:49.330 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:49.330 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:49.330 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:49.330 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:49.330 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:49.330 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:49.330 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:49.330 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:49.330 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:49.330 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.330 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.330 05:08:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.867 05:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:51.867 00:13:51.867 real 0m36.456s 00:13:51.867 user 1m47.316s 00:13:51.867 sys 0m8.636s 00:13:51.867 05:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.867 05:08:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.867 ************************************ 00:13:51.867 END TEST nvmf_rpc 00:13:51.867 
************************************ 00:13:51.867 05:08:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:51.867 05:08:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:51.867 05:08:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.867 05:08:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:51.867 ************************************ 00:13:51.867 START TEST nvmf_invalid 00:13:51.867 ************************************ 00:13:51.867 05:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:51.867 * Looking for test storage... 00:13:51.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:51.867 05:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:51.867 05:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:13:51.867 05:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
scripts/common.sh@336 -- # read -ra ver1 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:51.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.867 --rc genhtml_branch_coverage=1 00:13:51.867 --rc genhtml_function_coverage=1 00:13:51.867 --rc genhtml_legend=1 00:13:51.867 --rc geninfo_all_blocks=1 00:13:51.867 --rc geninfo_unexecuted_blocks=1 00:13:51.867 00:13:51.867 ' 
00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:51.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.867 --rc genhtml_branch_coverage=1 00:13:51.867 --rc genhtml_function_coverage=1 00:13:51.867 --rc genhtml_legend=1 00:13:51.867 --rc geninfo_all_blocks=1 00:13:51.867 --rc geninfo_unexecuted_blocks=1 00:13:51.867 00:13:51.867 ' 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:51.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.867 --rc genhtml_branch_coverage=1 00:13:51.867 --rc genhtml_function_coverage=1 00:13:51.867 --rc genhtml_legend=1 00:13:51.867 --rc geninfo_all_blocks=1 00:13:51.867 --rc geninfo_unexecuted_blocks=1 00:13:51.867 00:13:51.867 ' 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:51.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.867 --rc genhtml_branch_coverage=1 00:13:51.867 --rc genhtml_function_coverage=1 00:13:51.867 --rc genhtml_legend=1 00:13:51.867 --rc geninfo_all_blocks=1 00:13:51.867 --rc geninfo_unexecuted_blocks=1 00:13:51.867 00:13:51.867 ' 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.867 05:08:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:13:51.867 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.868 
05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.868 05:08:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:51.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:51.868 05:08:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:51.868 05:08:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:00.003 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:00.003 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:00.003 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:00.003 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:00.003 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:00.003 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:00.003 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:00.003 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:00.003 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:00.003 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:00.003 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:00.003 05:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:00.003 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:00.003 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:00.003 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:00.003 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:00.004 05:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:00.004 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:00.004 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:00.004 Found net devices under 0000:af:00.0: cvl_0_0 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:00.004 Found net devices under 0000:af:00.1: cvl_0_1 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:00.004 05:08:41 
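The `@410`–`@429` loop above globs each PCI device's `net/` directory to map bus addresses to interface names. A minimal standalone sketch of that lookup — `find_pci_net_devs` is a hypothetical helper name, and the `sysfs_root` parameter is added here so the logic can be exercised against a fake tree; the real script globs `/sys/bus/pci` directly and additionally checks the interface's `operstate`:

```shell
# Sketch of the net-device discovery step from nvmf/common.sh above.
# find_pci_net_devs and the sysfs_root parameter are illustrative additions.
find_pci_net_devs() {
    local sysfs_root=$1 pci=$2
    # Glob the interface directories under the PCI device, as in @411.
    local pci_net_devs=("$sysfs_root/devices/$pci/net/"*)
    # Keep only the interface names, as @427 does with "${pci_net_devs[@]##*/}".
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
}
```

On the machine in this log the two E810 ports resolve to `cvl_0_0` and `cvl_0_1`, matching the `Found net devices under 0000:af:00.x` lines above.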
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:00.004 05:08:41 
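The `nvmf_tcp_init` commands above (`@265`–`@284`) build the two-interface TCP topology: the target-side NIC is moved into a private network namespace and each side gets one address from 10.0.0.0/24. A condensed sketch under stated assumptions — `setup_nvmf_tcp_ns` is a hypothetical wrapper, the `RUN` dry-run hook is an addition (the real script runs the commands directly and needs root plus the physical NICs):

```shell
# Condensed sketch of the netns topology nvmf/common.sh builds above.
# setup_nvmf_tcp_ns and the RUN dry-run hook are illustrative additions.
setup_nvmf_tcp_ns() {
    # RUN=echo prints the commands instead of executing them.
    local run=${RUN:-} target_if=$1 initiator_if=$2 ns=${1}_ns_spdk
    $run ip -4 addr flush "$target_if"                                 # @267
    $run ip -4 addr flush "$initiator_if"                              # @268
    $run ip netns add "$ns"                                            # @271
    $run ip link set "$target_if" netns "$ns"                          # @274
    $run ip addr add 10.0.0.1/24 dev "$initiator_if"                   # @277
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"  # @278
    $run ip link set "$initiator_if" up                                # @281
    $run ip netns exec "$ns" ip link set "$target_if" up               # @283
    $run ip netns exec "$ns" ip link set lo up                         # @284
}
```

With the interfaces from this log, `setup_nvmf_tcp_ns cvl_0_0 cvl_0_1` reproduces the namespace name `cvl_0_0_ns_spdk` seen in `NVMF_TARGET_NAMESPACE`.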
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:00.004 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:00.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:14:00.004 00:14:00.005 --- 10.0.0.2 ping statistics --- 00:14:00.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.005 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:00.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:00.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:14:00.005 00:14:00.005 --- 10.0.0.1 ping statistics --- 00:14:00.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.005 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:00.005 05:08:41 
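The `ipts` call at `@287` expands (at `@790`) into an `iptables` invocation that appends a `-m comment` tag reproducing the rule's own arguments, so teardown can later locate and delete exactly the rules the test inserted. A minimal sketch of that wrapper pattern — the `RUN` dry-run hook is an addition; the real helper runs `iptables` directly and needs root:

```shell
# Sketch of the self-tagging iptables wrapper visible in the trace above.
# The RUN dry-run hook is an illustrative addition.
ipts() {
    # "$@" is the rule; "SPDK_NVMF:$*" tags it with its own text so cleanup
    # can grep the ruleset for SPDK_NVMF-tagged entries.
    ${RUN:-} iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
```

The comment string in the log (`SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT`) is exactly this `$*` expansion. The two `ping -c 1` exchanges that follow confirm both directions across the namespace boundary before the target starts.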
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=422796 00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 422796 00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 422796 ']' 00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
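`nvmfappstart` launches `nvmf_tgt` inside the namespace and then `waitforlisten` polls (up to `max_retries=100`, per the trace) until the process is alive and its RPC socket appears. A simplified sketch of that polling loop, assuming a socket-existence check and a 0.1 s interval (the real helper also issues an RPC over `/var/tmp/spdk.sock` before declaring success):

```shell
# Simplified sketch of the waitforlisten pattern from autotest_common.sh.
# The retry count parameter and sleep interval are illustrative assumptions.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [ -S "$rpc_addr" ] && return 0           # RPC socket is up
        sleep 0.1
    done
    return 1                                     # timed out
}
```

In the log this is the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line for pid 422796.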
00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:00.005 05:08:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:00.005 [2024-12-09 05:08:41.443667] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:14:00.005 [2024-12-09 05:08:41.443720] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.005 [2024-12-09 05:08:41.543402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:00.005 [2024-12-09 05:08:41.584169] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.005 [2024-12-09 05:08:41.584216] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.005 [2024-12-09 05:08:41.584230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.005 [2024-12-09 05:08:41.584241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.005 [2024-12-09 05:08:41.584251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:00.005 [2024-12-09 05:08:41.586079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.005 [2024-12-09 05:08:41.586189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.005 [2024-12-09 05:08:41.586296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.005 [2024-12-09 05:08:41.586297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:00.005 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:00.005 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:14:00.005 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:00.005 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:00.005 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:00.005 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.005 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:00.005 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode14502 00:14:00.265 [2024-12-09 05:08:42.494866] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:00.265 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:00.265 { 00:14:00.265 "nqn": "nqn.2016-06.io.spdk:cnode14502", 00:14:00.265 "tgt_name": "foobar", 00:14:00.265 "method": "nvmf_create_subsystem", 00:14:00.265 "req_id": 1 00:14:00.265 } 00:14:00.265 Got JSON-RPC error 
response 00:14:00.265 response: 00:14:00.265 { 00:14:00.265 "code": -32603, 00:14:00.265 "message": "Unable to find target foobar" 00:14:00.265 }' 00:14:00.265 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:00.265 { 00:14:00.265 "nqn": "nqn.2016-06.io.spdk:cnode14502", 00:14:00.265 "tgt_name": "foobar", 00:14:00.265 "method": "nvmf_create_subsystem", 00:14:00.265 "req_id": 1 00:14:00.265 } 00:14:00.265 Got JSON-RPC error response 00:14:00.265 response: 00:14:00.265 { 00:14:00.265 "code": -32603, 00:14:00.265 "message": "Unable to find target foobar" 00:14:00.265 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:00.265 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:00.265 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode10476 00:14:00.265 [2024-12-09 05:08:42.695580] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10476: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:00.265 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:00.265 { 00:14:00.265 "nqn": "nqn.2016-06.io.spdk:cnode10476", 00:14:00.265 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:00.265 "method": "nvmf_create_subsystem", 00:14:00.265 "req_id": 1 00:14:00.265 } 00:14:00.265 Got JSON-RPC error response 00:14:00.265 response: 00:14:00.265 { 00:14:00.265 "code": -32602, 00:14:00.265 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:00.265 }' 00:14:00.265 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:00.265 { 00:14:00.265 "nqn": "nqn.2016-06.io.spdk:cnode10476", 00:14:00.265 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:00.265 "method": "nvmf_create_subsystem", 
00:14:00.265 "req_id": 1 00:14:00.265 } 00:14:00.265 Got JSON-RPC error response 00:14:00.265 response: 00:14:00.265 { 00:14:00.265 "code": -32602, 00:14:00.265 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:00.265 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:00.265 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:00.265 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25287 00:14:00.525 [2024-12-09 05:08:42.900204] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25287: invalid model number 'SPDK_Controller' 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:00.525 { 00:14:00.525 "nqn": "nqn.2016-06.io.spdk:cnode25287", 00:14:00.525 "model_number": "SPDK_Controller\u001f", 00:14:00.525 "method": "nvmf_create_subsystem", 00:14:00.525 "req_id": 1 00:14:00.525 } 00:14:00.525 Got JSON-RPC error response 00:14:00.525 response: 00:14:00.525 { 00:14:00.525 "code": -32602, 00:14:00.525 "message": "Invalid MN SPDK_Controller\u001f" 00:14:00.525 }' 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:00.525 { 00:14:00.525 "nqn": "nqn.2016-06.io.spdk:cnode25287", 00:14:00.525 "model_number": "SPDK_Controller\u001f", 00:14:00.525 "method": "nvmf_create_subsystem", 00:14:00.525 "req_id": 1 00:14:00.525 } 00:14:00.525 Got JSON-RPC error response 00:14:00.525 response: 00:14:00.525 { 00:14:00.525 "code": -32602, 00:14:00.525 "message": "Invalid MN SPDK_Controller\u001f" 00:14:00.525 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
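Each negative test above follows the same shape: capture the JSON-RPC error text from `rpc.py nvmf_create_subsystem` into `out`, then glob-match it against the expected message (`*Unable to find target*`, `*Invalid SN*`, `*Invalid MN*`). A minimal standalone version of that assertion — `expect_rpc_error` is a hypothetical helper name, not a function from invalid.sh:

```shell
# Sketch of the error-matching checks at @41/@46/@51 above.
# expect_rpc_error is an illustrative helper name.
expect_rpc_error() {
    local out=$1 pattern=$2
    # Same bash glob containment test the script uses on the captured output.
    [[ $out == *"$pattern"* ]]
}
```

Matching on the message substring rather than the full JSON keeps the check stable across formatting changes while still pinning the error class (`-32603` lookup failure vs. `-32602` invalid parameter).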
length=21 ll 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:00.525 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:00.526 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:00.526 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.526 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.526 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:00.526 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:00.526 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:00.526 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.526 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.526 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:00.526 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 
00:14:00.526 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:00.526 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.526 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.526 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:00.526 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:00.526 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:00.526 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.526 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.526 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:00.785 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:00.785 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:00.785 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.785 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.785 05:08:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:00.785 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:00.785 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:00.785 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.785 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.785 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:00.785 
05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:00.785 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:00.785 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.785 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.785 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:00.785 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:00.785 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:00.785 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.785 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.785 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:14:00.785 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:00.785 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.786 05:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.786 05:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ " == \- ]] 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '".ecNJ:,UN5y &/6XXd!"' 00:14:00.786 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '".ecNJ:,UN5y &/6XXd!"' nqn.2016-06.io.spdk:cnode22145 00:14:01.046 [2024-12-09 05:08:43.277421] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22145: invalid serial number '".ecNJ:,UN5y &/6XXd!"' 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:01.046 { 00:14:01.046 "nqn": "nqn.2016-06.io.spdk:cnode22145", 00:14:01.046 "serial_number": "\".ecNJ:,UN5y &/6XXd!\"", 00:14:01.046 "method": "nvmf_create_subsystem", 00:14:01.046 "req_id": 1 00:14:01.046 } 00:14:01.046 Got JSON-RPC error response 00:14:01.046 response: 00:14:01.046 { 00:14:01.046 "code": -32602, 00:14:01.046 "message": "Invalid SN \".ecNJ:,UN5y &/6XXd!\"" 00:14:01.046 }' 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:01.046 { 
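The long `string+=` run above is the expansion of `gen_random_s 21`: pick random codes from the printable ASCII table, `printf %x` each one, and append the decoded character. A compact reconstruction of that helper — restricted here to codes 32..126, whereas the `chars` array in the trace also includes 127 (DEL):

```shell
# Reconstruction of the gen_random_s helper whose expansion fills the log.
# Restricted to printable ASCII 32..126 (the original array also has 127).
gen_random_s() {
    local length=$1 ll code ch string=
    for ((ll = 0; ll < length; ll++)); do
        code=$((RANDOM % 95 + 32))                     # random printable code
        printf -v ch '%b' "\\0$(printf '%o' "$code")"  # octal escape -> char
        string+=$ch
    done
    printf '%s\n' "$string"
}
```

Here it produced the 21-character serial `".ecNJ:,UN5y &/6XXd!"`, which `nvmf_create_subsystem` then correctly rejects with `Invalid SN` since it exceeds the 20-character limit.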
00:14:01.046 "nqn": "nqn.2016-06.io.spdk:cnode22145", 00:14:01.046 "serial_number": "\".ecNJ:,UN5y &/6XXd!\"", 00:14:01.046 "method": "nvmf_create_subsystem", 00:14:01.046 "req_id": 1 00:14:01.046 } 00:14:01.046 Got JSON-RPC error response 00:14:01.046 response: 00:14:01.046 { 00:14:01.046 "code": -32602, 00:14:01.046 "message": "Invalid SN \".ecNJ:,UN5y &/6XXd!\"" 00:14:01.046 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:01.046 05:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.046 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:01.047 05:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:01.047 05:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.047 05:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.047 05:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:01.047 05:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.047 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:01.048 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:01.048 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:01.048 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.048 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.048 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:01.048 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:01.048 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:01.048 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.048 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.048 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:01.048 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:01.048 05:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:01.048 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.048 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.048 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:01.048 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:01.308 05:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.308 05:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.308 05:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:14:01.308 05:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ( == \- ]] 00:14:01.308 05:08:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '(}&Y(Zy)I,K#4A"7Y7^/Q /dev/null' 00:14:03.646 05:08:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.547 05:08:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:05.547 00:14:05.547 real 0m14.147s 00:14:05.547 user 0m21.784s 00:14:05.547 sys 0m6.738s 00:14:05.547 05:08:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.547 05:08:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:14:05.547 ************************************ 00:14:05.547 END TEST nvmf_invalid 00:14:05.547 ************************************ 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:05.806 ************************************ 00:14:05.806 START TEST nvmf_connect_stress 00:14:05.806 ************************************ 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:05.806 * Looking for test storage... 
00:14:05.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:05.806 05:08:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:05.806 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.066 05:08:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:06.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.066 --rc genhtml_branch_coverage=1 00:14:06.066 --rc genhtml_function_coverage=1 00:14:06.066 --rc genhtml_legend=1 00:14:06.066 --rc geninfo_all_blocks=1 00:14:06.066 --rc geninfo_unexecuted_blocks=1 00:14:06.066 00:14:06.066 ' 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:06.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.066 --rc genhtml_branch_coverage=1 00:14:06.066 --rc genhtml_function_coverage=1 00:14:06.066 --rc genhtml_legend=1 00:14:06.066 --rc geninfo_all_blocks=1 00:14:06.066 --rc geninfo_unexecuted_blocks=1 00:14:06.066 00:14:06.066 ' 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:06.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.066 --rc genhtml_branch_coverage=1 00:14:06.066 --rc genhtml_function_coverage=1 00:14:06.066 --rc genhtml_legend=1 00:14:06.066 --rc geninfo_all_blocks=1 00:14:06.066 --rc geninfo_unexecuted_blocks=1 00:14:06.066 00:14:06.066 ' 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:06.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.066 --rc genhtml_branch_coverage=1 00:14:06.066 --rc genhtml_function_coverage=1 00:14:06.066 --rc genhtml_legend=1 00:14:06.066 --rc geninfo_all_blocks=1 00:14:06.066 --rc geninfo_unexecuted_blocks=1 00:14:06.066 00:14:06.066 ' 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.066 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:06.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:06.067 05:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:14.191 05:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:14.191 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:14.192 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:14.192 05:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:14.192 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.192 05:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:14.192 Found net devices under 0000:af:00.0: cvl_0_0 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:14.192 Found net devices under 0000:af:00.1: cvl_0_1 
00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:14.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:14.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:14:14.192 00:14:14.192 --- 10.0.0.2 ping statistics --- 00:14:14.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.192 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:14.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:14.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:14:14.192 00:14:14.192 --- 10.0.0.1 ping statistics --- 00:14:14.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.192 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:14.192 05:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=427459 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 427459 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 427459 ']' 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.192 05:08:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.192 [2024-12-09 05:08:55.742668] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:14:14.192 [2024-12-09 05:08:55.742722] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.192 [2024-12-09 05:08:55.841074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:14.192 [2024-12-09 05:08:55.882324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.192 [2024-12-09 05:08:55.882361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.192 [2024-12-09 05:08:55.882371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.192 [2024-12-09 05:08:55.882380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.192 [2024-12-09 05:08:55.882388] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:14.193 [2024-12-09 05:08:55.883790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.193 [2024-12-09 05:08:55.883900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.193 [2024-12-09 05:08:55.883901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:14.193 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.193 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:14:14.193 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:14.193 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:14.193 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.193 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.193 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:14.193 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.193 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.193 [2024-12-09 05:08:56.635834] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.193 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.193 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:14.193 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:14.193 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.193 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.193 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.193 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.193 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.451 [2024-12-09 05:08:56.660312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.451 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.451 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:14.451 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.451 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.451 NULL1 00:14:14.451 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.451 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=427718 00:14:14.451 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:14.451 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.452 05:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.711 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.711 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:14.711 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.711 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.711 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.969 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.969 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:14.969 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.969 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.969 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.538 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.538 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:15.538 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.538 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.538 05:08:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.797 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.797 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:15.797 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.797 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.797 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.056 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.057 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:16.057 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.057 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.057 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.315 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.315 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:16.315 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.315 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.315 05:08:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.574 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.574 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:16.574 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.574 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.574 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.143 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.143 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:17.143 05:08:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.143 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.143 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.402 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.402 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:17.402 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.402 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.402 05:08:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.662 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.662 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:17.662 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.662 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.662 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.921 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.921 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:17.921 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.921 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.921 05:09:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.489 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.489 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:18.489 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.489 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.489 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.749 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.749 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:18.749 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.749 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.749 05:09:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.008 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.008 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:19.008 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.008 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.008 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.267 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.267 05:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:19.267 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.267 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.267 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.525 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.525 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:19.525 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.525 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.525 05:09:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.093 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.093 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:20.093 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.093 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.093 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.353 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.353 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:20.353 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.353 05:09:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.353 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.613 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.613 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:20.613 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.613 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.613 05:09:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.872 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.872 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:20.872 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.872 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.872 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.130 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.130 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:21.130 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.130 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.130 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.698 05:09:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.698 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:21.698 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.698 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.698 05:09:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.957 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.957 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:21.957 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.957 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.957 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.214 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.214 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:22.214 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.214 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.214 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.471 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.471 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:22.471 
05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.471 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.471 05:09:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.036 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.036 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:23.036 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.036 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.036 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.293 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.293 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:23.293 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.293 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.293 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.551 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.551 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:23.551 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.551 05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.551 
05:09:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.809 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.809 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:23.809 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.809 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.809 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.067 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.067 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:24.067 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.067 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.067 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.634 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:24.634 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.634 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 427718 00:14:24.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (427718) - No such process 00:14:24.634 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 427718 00:14:24.634 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:24.634 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:24.634 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:24.634 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:24.634 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:24.634 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:24.634 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:24.634 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:24.635 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:24.635 rmmod nvme_tcp 00:14:24.635 rmmod nvme_fabrics 00:14:24.635 rmmod nvme_keyring 00:14:24.635 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:24.635 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:24.635 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:24.635 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 427459 ']' 00:14:24.635 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 427459 00:14:24.635 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 427459 ']' 00:14:24.635 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 427459 00:14:24.635 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 
00:14:24.635 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:24.635 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 427459 00:14:24.635 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:24.635 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:24.635 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 427459' 00:14:24.635 killing process with pid 427459 00:14:24.635 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 427459 00:14:24.635 05:09:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 427459 00:14:24.894 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:24.894 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:24.894 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:24.894 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:24.894 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:14:24.894 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:24.894 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:14:24.894 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:24.894 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:14:24.894 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.894 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:24.894 05:09:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:27.436 00:14:27.436 real 0m21.199s 00:14:27.436 user 0m42.753s 00:14:27.436 sys 0m9.160s 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.436 ************************************ 00:14:27.436 END TEST nvmf_connect_stress 00:14:27.436 ************************************ 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:27.436 ************************************ 00:14:27.436 START TEST nvmf_fused_ordering 00:14:27.436 ************************************ 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:27.436 * Looking for test storage... 
00:14:27.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:27.436 05:09:09 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:27.436 05:09:09 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:27.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.436 --rc genhtml_branch_coverage=1 00:14:27.436 --rc genhtml_function_coverage=1 00:14:27.436 --rc genhtml_legend=1 00:14:27.436 --rc geninfo_all_blocks=1 00:14:27.436 --rc geninfo_unexecuted_blocks=1 00:14:27.436 00:14:27.436 ' 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:27.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.436 --rc genhtml_branch_coverage=1 00:14:27.436 --rc genhtml_function_coverage=1 00:14:27.436 --rc genhtml_legend=1 00:14:27.436 --rc geninfo_all_blocks=1 00:14:27.436 --rc geninfo_unexecuted_blocks=1 00:14:27.436 00:14:27.436 ' 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:27.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.436 --rc genhtml_branch_coverage=1 00:14:27.436 --rc genhtml_function_coverage=1 00:14:27.436 --rc genhtml_legend=1 00:14:27.436 --rc geninfo_all_blocks=1 00:14:27.436 --rc geninfo_unexecuted_blocks=1 00:14:27.436 00:14:27.436 ' 00:14:27.436 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:27.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.436 --rc genhtml_branch_coverage=1 00:14:27.436 --rc genhtml_function_coverage=1 00:14:27.436 --rc genhtml_legend=1 00:14:27.436 --rc geninfo_all_blocks=1 00:14:27.436 --rc geninfo_unexecuted_blocks=1 00:14:27.437 00:14:27.437 ' 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:27.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
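Editor's note: the `[: : integer expression expected` complaint logged above (common.sh line 33) is the classic result of running a numeric test such as `'[' '' -eq 1 ']'` on an unset or empty variable. A minimal sketch of the failure mode and the usual defaulting fix follows; the variable name `flag` is a hypothetical stand-in, not the actual variable tested in common.sh.

```shell
#!/usr/bin/env bash
# Sketch of the error seen in the log: "[" needs an integer operand, so an
# empty expansion triggers "integer expression expected". Defaulting the
# expansion with ${var:-0} avoids the noise.
flag=""                           # simulates an unset/empty test flag
if [ "${flag:-0}" -eq 1 ]; then   # ${flag:-0} substitutes 0 when flag is empty
    echo "flag set"
else
    echo "flag clear"             # prints: flag clear
fi
```

In the log the script still proceeds because the failed test simply evaluates false, which is why the run continues past the error.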
00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:27.437 05:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:34.360 05:09:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:34.360 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:34.360 05:09:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:34.360 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:34.360 05:09:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:34.360 Found net devices under 0000:af:00.0: cvl_0_0 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:34.360 Found net devices under 0000:af:00.1: cvl_0_1 
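Editor's note: the device discovery traced above globs each PCI function's `net/` directory and strips the results to interface names (`${pci_net_devs[@]##*/}`). A self-contained sketch using a mock sysfs tree, so it runs without hardware; the real script globs `/sys/bus/pci/devices/$pci/net/`.

```shell
#!/usr/bin/env bash
# Mock the sysfs layout the log's glob walks, then reproduce the two steps:
# glob full paths, then keep only the basenames (interface names).
sysfs="$(mktemp -d)"                      # stand-in for /sys/bus/pci/devices
pci="0000:af:00.0"                        # PCI address taken from the log
mkdir -p "$sysfs/$pci/net/cvl_0_0"        # pretend one netdev is bound here
pci_net_devs=("$sysfs/$pci/net/"*)        # glob full paths, as common.sh does
pci_net_devs=("${pci_net_devs[@]##*/}")   # strip to interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$sysfs"
```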
00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:34.360 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:34.618 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:34.618 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:34.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:34.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:14:34.619 00:14:34.619 --- 10.0.0.2 ping statistics --- 00:14:34.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.619 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:34.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:34.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:14:34.619 00:14:34.619 --- 10.0.0.1 ping statistics --- 00:14:34.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.619 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:34.619 05:09:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=433864 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 433864 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 433864 ']' 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.619 05:09:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:34.619 [2024-12-09 05:09:16.995953] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:14:34.619 [2024-12-09 05:09:16.996000] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.878 [2024-12-09 05:09:17.092729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.878 [2024-12-09 05:09:17.132019] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.878 [2024-12-09 05:09:17.132059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:34.878 [2024-12-09 05:09:17.132068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:34.878 [2024-12-09 05:09:17.132077] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:34.878 [2024-12-09 05:09:17.132084] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
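Editor's note: the nvmf_tcp_init plumbing traced above (netns creation, addressing, firewall rule, ping checks) reduces to roughly the sequence below. It is shown with a dry-run wrapper since the real commands require root and the physical cvl_0_0/cvl_0_1 interfaces; names and addresses are copied from the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-namespace setup: move the target NIC into its
# own namespace, address both ends, open the NVMe/TCP port, and sanity-check.
# run() only prints each command, so this is inspectable without privileges.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                                           # namespace from the log
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target-side NIC
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                       # initiator -> target check
```

Once both pings succeed, `NVMF_APP` is prefixed with `ip netns exec $NVMF_TARGET_NAMESPACE`, which is why nvmf_tgt later listens on 10.0.0.2 from inside the namespace.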
00:14:34.878 [2024-12-09 05:09:17.132678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.445 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:35.445 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:35.445 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:35.445 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:35.445 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:35.445 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.445 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:35.445 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.445 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:35.445 [2024-12-09 05:09:17.883040] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.445 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.445 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:35.445 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.445 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:35.445 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.445 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:35.445 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.445 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:35.445 [2024-12-09 05:09:17.903250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.445 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.445 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:35.445 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.445 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:35.704 NULL1 00:14:35.704 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.704 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:35.704 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.704 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:35.704 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.704 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:35.704 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.704 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:35.704 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.704 05:09:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:35.704 [2024-12-09 05:09:17.962029] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:14:35.704 [2024-12-09 05:09:17.962066] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434024 ] 00:14:35.963 Attached to nqn.2016-06.io.spdk:cnode1 00:14:35.963 Namespace ID: 1 size: 1GB 00:14:35.963 fused_ordering(0) 00:14:35.963 fused_ordering(1) 00:14:35.963 fused_ordering(2) 00:14:35.963 fused_ordering(3) 00:14:35.963 fused_ordering(4) 00:14:35.963 fused_ordering(5) 00:14:35.963 fused_ordering(6) 00:14:35.963 fused_ordering(7) 00:14:35.963 fused_ordering(8) 00:14:35.963 fused_ordering(9) 00:14:35.963 fused_ordering(10) 00:14:35.963 fused_ordering(11) 00:14:35.963 fused_ordering(12) 00:14:35.963 fused_ordering(13) 00:14:35.963 fused_ordering(14) 00:14:35.963 fused_ordering(15) 00:14:35.963 fused_ordering(16) 00:14:35.963 fused_ordering(17) 00:14:35.963 fused_ordering(18) 00:14:35.963 fused_ordering(19) 00:14:35.963 fused_ordering(20) 00:14:35.963 fused_ordering(21) 00:14:35.963 fused_ordering(22) 00:14:35.963 fused_ordering(23) 00:14:35.963 fused_ordering(24) 00:14:35.963 fused_ordering(25) 00:14:35.963 fused_ordering(26) 00:14:35.963 fused_ordering(27) 00:14:35.963 
fused_ordering(28) 00:14:35.963 … fused_ordering(1023) 00:14:37.314 [repetitive per-iteration fused_ordering output for iterations 28 through 1023 elided; all iterations logged between 00:14:35.963 and 00:14:37.314] 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:37.314 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:37.314 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:37.314 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:37.314 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:37.314 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:37.314 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:37.314 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:37.314 rmmod nvme_tcp 00:14:37.314 rmmod nvme_fabrics 00:14:37.314 rmmod nvme_keyring 00:14:37.574 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:14:37.574 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:37.574 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:37.574 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 433864 ']' 00:14:37.574 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 433864 00:14:37.574 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 433864 ']' 00:14:37.574 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 433864 00:14:37.574 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:37.574 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:37.574 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 433864 00:14:37.574 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:37.574 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:37.574 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 433864' 00:14:37.574 killing process with pid 433864 00:14:37.574 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 433864 00:14:37.574 05:09:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 433864 00:14:37.835 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:37.835 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:14:37.835 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:37.835 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:37.835 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:37.835 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:37.835 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:37.835 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:37.835 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:37.835 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.835 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.835 05:09:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.744 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:39.744 00:14:39.744 real 0m12.793s 00:14:39.744 user 0m6.410s 00:14:39.744 sys 0m6.788s 00:14:39.744 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:39.744 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:39.744 ************************************ 00:14:39.744 END TEST nvmf_fused_ordering 00:14:39.744 ************************************ 00:14:39.744 05:09:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:39.744 05:09:22 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:39.744 05:09:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:39.744 05:09:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:40.004 ************************************ 00:14:40.004 START TEST nvmf_ns_masking 00:14:40.004 ************************************ 00:14:40.004 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:40.004 * Looking for test storage... 00:14:40.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:40.004 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:40.004 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:14:40.004 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:40.004 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:40.004 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:40.004 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:40.004 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:40.004 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:40.004 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:40.004 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:40.004 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:40.004 05:09:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:40.004 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:40.004 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:40.004 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:40.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.005 --rc genhtml_branch_coverage=1 00:14:40.005 --rc genhtml_function_coverage=1 00:14:40.005 --rc genhtml_legend=1 00:14:40.005 --rc geninfo_all_blocks=1 00:14:40.005 --rc geninfo_unexecuted_blocks=1 00:14:40.005 00:14:40.005 ' 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:40.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.005 --rc genhtml_branch_coverage=1 00:14:40.005 --rc genhtml_function_coverage=1 00:14:40.005 --rc genhtml_legend=1 00:14:40.005 --rc geninfo_all_blocks=1 00:14:40.005 --rc geninfo_unexecuted_blocks=1 00:14:40.005 00:14:40.005 ' 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:40.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.005 --rc genhtml_branch_coverage=1 00:14:40.005 --rc genhtml_function_coverage=1 00:14:40.005 --rc genhtml_legend=1 00:14:40.005 --rc geninfo_all_blocks=1 00:14:40.005 --rc geninfo_unexecuted_blocks=1 00:14:40.005 00:14:40.005 ' 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:40.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.005 --rc genhtml_branch_coverage=1 00:14:40.005 --rc 
genhtml_function_coverage=1 00:14:40.005 --rc genhtml_legend=1 00:14:40.005 --rc geninfo_all_blocks=1 00:14:40.005 --rc geninfo_unexecuted_blocks=1 00:14:40.005 00:14:40.005 ' 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:40.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:40.005 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:40.265 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c40a0ada-b714-4a1d-9fdd-53ead55656ec 00:14:40.265 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:40.265 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c86a9f69-b7d2-40c0-8842-64816eca3a49 00:14:40.265 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:40.265 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:40.265 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:40.265 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:40.265 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=bc78289e-85d7-402d-aa1f-1464c8c47873 00:14:40.265 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:40.265 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:40.265 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:40.265 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:40.265 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:14:40.265 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:40.265 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.265 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.265 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.265 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:40.265 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:40.265 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:40.265 05:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:48.388 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:48.388 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:48.388 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:48.388 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:48.388 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:48.388 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:48.388 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:48.389 05:09:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:48.389 05:09:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:48.389 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:48.389 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:14:48.389 Found net devices under 0000:af:00.0: cvl_0_0 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:48.389 Found net devices under 0000:af:00.1: cvl_0_1 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:48.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:48.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:14:48.389 00:14:48.389 --- 10.0.0.2 ping statistics --- 00:14:48.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.389 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:48.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:48.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:14:48.389 00:14:48.389 --- 10.0.0.1 ping statistics --- 00:14:48.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.389 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:14:48.389 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.390 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:48.390 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:48.390 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.390 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:48.390 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:48.390 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.390 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:48.390 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:48.390 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:48.390 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:48.390 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:48.390 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:48.390 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=438124 00:14:48.390 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 438124 
00:14:48.390 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 438124 ']' 00:14:48.390 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.390 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:48.390 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.390 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.390 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.390 05:09:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:48.390 [2024-12-09 05:09:29.881234] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:14:48.390 [2024-12-09 05:09:29.881291] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.390 [2024-12-09 05:09:29.978360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.390 [2024-12-09 05:09:30.022147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.390 [2024-12-09 05:09:30.022185] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:48.390 [2024-12-09 05:09:30.022199] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.390 [2024-12-09 05:09:30.022219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.390 [2024-12-09 05:09:30.022232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:48.390 [2024-12-09 05:09:30.022761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.390 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:48.390 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:48.390 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:48.390 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:48.390 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:48.390 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.390 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:48.650 [2024-12-09 05:09:30.927789] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.650 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:48.650 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:48.650 05:09:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:14:48.910 Malloc1 00:14:48.910 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:48.910 Malloc2 00:14:48.910 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:49.170 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:49.430 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:49.430 [2024-12-09 05:09:31.863723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.430 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:49.430 05:09:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I bc78289e-85d7-402d-aa1f-1464c8c47873 -a 10.0.0.2 -s 4420 -i 4 00:14:49.689 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:49.689 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:49.689 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:49.689 05:09:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:49.689 05:09:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:51.596 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:51.596 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:51.596 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:51.596 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:51.596 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:51.596 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:51.596 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:51.596 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:51.854 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:51.854 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:51.854 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:51.854 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:51.854 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:51.854 [ 0]:0x1 00:14:51.854 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:51.854 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:51.854 
05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1a60b0bf505d49049b23579d550bc317 00:14:51.854 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1a60b0bf505d49049b23579d550bc317 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:51.854 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:52.112 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:52.112 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:52.112 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:52.112 [ 0]:0x1 00:14:52.112 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:52.112 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:52.112 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1a60b0bf505d49049b23579d550bc317 00:14:52.112 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1a60b0bf505d49049b23579d550bc317 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:52.112 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:52.112 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:52.112 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:52.112 [ 1]:0x2 00:14:52.112 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:14:52.112 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:52.112 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84c2c03c91ea4db6a3ed2cfa6adb55b6 00:14:52.112 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84c2c03c91ea4db6a3ed2cfa6adb55b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:52.112 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:52.112 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:52.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.371 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:52.630 05:09:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:52.630 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:52.630 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I bc78289e-85d7-402d-aa1f-1464c8c47873 -a 10.0.0.2 -s 4420 -i 4 00:14:52.888 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:52.888 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:52.888 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:52.888 05:09:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:52.888 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:52.888 05:09:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:54.791 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:54.791 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:54.791 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:54.791 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:54.791 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:54.791 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:54.791 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:54.791 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:55.051 [ 0]:0x2 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84c2c03c91ea4db6a3ed2cfa6adb55b6 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84c2c03c91ea4db6a3ed2cfa6adb55b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.051 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:55.312 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:55.312 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:55.312 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.312 [ 0]:0x1 00:14:55.312 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:55.312 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.312 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1a60b0bf505d49049b23579d550bc317 00:14:55.312 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1a60b0bf505d49049b23579d550bc317 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.312 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:55.312 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:55.312 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.312 [ 1]:0x2 00:14:55.312 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:55.312 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.572 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84c2c03c91ea4db6a3ed2cfa6adb55b6 00:14:55.572 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84c2c03c91ea4db6a3ed2cfa6adb55b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.572 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:55.572 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:55.572 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:55.572 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:55.572 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:55.572 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:55.572 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:14:55.572 05:09:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:55.572 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:55.572 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.572 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:55.572 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:55.572 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.831 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:55.831 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.831 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:55.831 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:55.831 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:55.831 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:55.831 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:55.831 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.831 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:55.831 [ 0]:0x2 00:14:55.831 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:55.831 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.831 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84c2c03c91ea4db6a3ed2cfa6adb55b6 00:14:55.831 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84c2c03c91ea4db6a3ed2cfa6adb55b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.831 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:55.831 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:55.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.831 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:56.090 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:56.090 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I bc78289e-85d7-402d-aa1f-1464c8c47873 -a 10.0.0.2 -s 4420 -i 4 00:14:56.350 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:56.350 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:56.350 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:56.350 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:56.350 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:56.350 05:09:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:58.258 [ 0]:0x1 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1a60b0bf505d49049b23579d550bc317 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1a60b0bf505d49049b23579d550bc317 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:58.258 [ 1]:0x2 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.258 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:58.517 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84c2c03c91ea4db6a3ed2cfa6adb55b6 00:14:58.517 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84c2c03c91ea4db6a3ed2cfa6adb55b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.517 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:58.517 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:58.517 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:58.517 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:58.517 05:09:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:58.517 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:58.517 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:58.517 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:58.517 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:58.517 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.517 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:58.517 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:58.517 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.776 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:58.776 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.776 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:58.776 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:58.776 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:58.776 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:58.776 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 
00:14:58.776 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.776 05:09:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:58.776 [ 0]:0x2 00:14:58.776 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:58.776 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.776 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84c2c03c91ea4db6a3ed2cfa6adb55b6 00:14:58.776 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84c2c03c91ea4db6a3ed2cfa6adb55b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.776 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:58.776 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:58.777 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:58.777 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.777 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:58.777 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.777 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:58.777 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.777 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:58.777 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.777 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:58.777 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:58.777 [2024-12-09 05:09:41.222337] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:58.777 request: 00:14:58.777 { 00:14:58.777 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:58.777 "nsid": 2, 00:14:58.777 "host": "nqn.2016-06.io.spdk:host1", 00:14:58.777 "method": "nvmf_ns_remove_host", 00:14:58.777 "req_id": 1 00:14:58.777 } 00:14:58.777 Got JSON-RPC error response 00:14:58.777 response: 00:14:58.777 { 00:14:58.777 "code": -32602, 00:14:58.777 "message": "Invalid parameters" 00:14:58.777 } 00:14:59.036 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:59.036 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:59.036 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:59.036 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
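The trace above keeps exercising the `ns_is_visible` helper from `target/ns_masking.sh`: a namespace counts as visible when the NGUID reported by `nvme id-ns ... | jq -r .nguid` is non-zero, which is why masked namespaces compare equal to the all-zero string. A minimal sketch of that core check, reconstructed from the grep/jq calls in the log (not the script's exact source; the real helper reads the NGUID from the device):

```shell
# Sketch of the ns_is_visible decision seen in the log.
# In ns_masking.sh the nguid comes from:
#   nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid
# Here it is passed in directly so the logic is self-contained.
ns_is_visible() {
  local nguid=$1
  # A masked (not visible) namespace reports an all-zero NGUID.
  [[ "$nguid" != "00000000000000000000000000000000" ]]
}

ns_is_visible "1a60b0bf505d49049b23579d550bc317" && echo visible
ns_is_visible "00000000000000000000000000000000" || echo hidden
```

This matches the log's pattern: after `nvmf_subsystem_add_ns ... --no-auto-visible` the check yields the zero NGUID (hidden), and after `nvmf_ns_add_host` it yields the real NGUID again.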
00:14:59.036 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:59.036 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:59.036 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:59.036 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:59.036 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:59.036 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:59.036 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:59.036 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:59.036 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:59.036 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:59.037 [ 0]:0x2 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84c2c03c91ea4db6a3ed2cfa6adb55b6 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84c2c03c91ea4db6a3ed2cfa6adb55b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:59.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=440157 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; 
nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 440157 /var/tmp/host.sock 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 440157 ']' 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:59.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.037 05:09:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:59.037 [2024-12-09 05:09:41.473530] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:14:59.037 [2024-12-09 05:09:41.473581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440157 ] 00:14:59.297 [2024-12-09 05:09:41.570782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.297 [2024-12-09 05:09:41.609574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.866 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.866 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:59.866 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.125 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:00.383 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c40a0ada-b714-4a1d-9fdd-53ead55656ec 00:15:00.383 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:00.383 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C40A0ADAB7144A1D9FDD53EAD55656EC -i 00:15:00.642 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c86a9f69-b7d2-40c0-8842-64816eca3a49 00:15:00.642 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:00.642 05:09:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C86A9F69B7D240C0884264816ECA3A49 -i 00:15:00.642 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:00.900 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:01.159 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:01.159 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:01.726 nvme0n1 00:15:01.726 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:01.726 05:09:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:01.985 nvme1n2 00:15:01.985 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:01.985 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:01.985 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:01.985 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:01.985 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:01.985 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:01.985 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:01.985 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:01.985 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:02.244 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c40a0ada-b714-4a1d-9fdd-53ead55656ec == \c\4\0\a\0\a\d\a\-\b\7\1\4\-\4\a\1\d\-\9\f\d\d\-\5\3\e\a\d\5\5\6\5\6\e\c ]] 00:15:02.244 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:02.244 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:02.244 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:02.502 05:09:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ c86a9f69-b7d2-40c0-8842-64816eca3a49 == \c\8\6\a\9\f\6\9\-\b\7\d\2\-\4\0\c\0\-\8\8\4\2\-\6\4\8\1\6\e\c\a\3\a\4\9 ]] 00:15:02.502 05:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:02.761 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:02.761 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid c40a0ada-b714-4a1d-9fdd-53ead55656ec 00:15:02.761 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:02.761 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C40A0ADAB7144A1D9FDD53EAD55656EC 00:15:02.761 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:02.761 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C40A0ADAB7144A1D9FDD53EAD55656EC 00:15:02.761 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.761 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.761 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.761 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.761 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.761 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.761 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.761 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:02.761 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C40A0ADAB7144A1D9FDD53EAD55656EC 00:15:03.019 [2024-12-09 05:09:45.397961] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:03.019 [2024-12-09 05:09:45.397992] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:03.019 [2024-12-09 05:09:45.398008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:03.019 request: 00:15:03.019 { 00:15:03.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.019 "namespace": { 00:15:03.019 "bdev_name": "invalid", 00:15:03.019 "nsid": 1, 00:15:03.019 "nguid": "C40A0ADAB7144A1D9FDD53EAD55656EC", 00:15:03.019 "no_auto_visible": false, 00:15:03.019 "hide_metadata": false 00:15:03.019 }, 00:15:03.019 "method": "nvmf_subsystem_add_ns", 00:15:03.019 "req_id": 1 00:15:03.019 } 00:15:03.019 Got JSON-RPC error response 00:15:03.019 response: 00:15:03.019 { 00:15:03.019 "code": -32602, 00:15:03.019 "message": "Invalid parameters" 00:15:03.019 } 00:15:03.019 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:03.019 05:09:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:03.019 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:03.019 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:03.019 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid c40a0ada-b714-4a1d-9fdd-53ead55656ec 00:15:03.019 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:03.019 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C40A0ADAB7144A1D9FDD53EAD55656EC -i 00:15:03.277 05:09:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:05.180 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:05.180 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:05.180 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:05.440 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:05.440 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 440157 00:15:05.440 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 440157 ']' 00:15:05.440 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 440157 00:15:05.440 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:05.440 05:09:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:05.440 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 440157 00:15:05.700 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:05.700 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:05.700 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 440157' 00:15:05.700 killing process with pid 440157 00:15:05.700 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 440157 00:15:05.700 05:09:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 440157 00:15:05.960 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:15:06.220 rmmod nvme_tcp 00:15:06.220 rmmod nvme_fabrics 00:15:06.220 rmmod nvme_keyring 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 438124 ']' 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 438124 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 438124 ']' 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 438124 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 438124 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 438124' 00:15:06.220 killing process with pid 438124 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 438124 00:15:06.220 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 438124 00:15:06.480 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:15:06.480 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:06.480 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:06.480 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:06.480 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:06.480 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:06.480 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:06.480 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:06.480 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:06.480 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.480 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:06.480 05:09:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.019 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:09.019 00:15:09.019 real 0m28.661s 00:15:09.019 user 0m32.641s 00:15:09.019 sys 0m9.258s 00:15:09.019 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.019 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:09.019 ************************************ 00:15:09.019 END TEST nvmf_ns_masking 00:15:09.019 ************************************ 00:15:09.019 05:09:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:09.019 
05:09:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:09.019 05:09:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:09.019 05:09:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.019 05:09:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:09.019 ************************************ 00:15:09.019 START TEST nvmf_nvme_cli 00:15:09.019 ************************************ 00:15:09.019 05:09:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:09.019 * Looking for test storage... 00:15:09.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:09.019 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:09.019 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:15:09.019 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:09.019 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:09.019 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:09.019 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:09.019 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:09.019 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:09.020 
05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:09.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.020 --rc genhtml_branch_coverage=1 00:15:09.020 --rc genhtml_function_coverage=1 00:15:09.020 --rc genhtml_legend=1 00:15:09.020 --rc geninfo_all_blocks=1 00:15:09.020 --rc geninfo_unexecuted_blocks=1 00:15:09.020 
00:15:09.020 ' 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:09.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.020 --rc genhtml_branch_coverage=1 00:15:09.020 --rc genhtml_function_coverage=1 00:15:09.020 --rc genhtml_legend=1 00:15:09.020 --rc geninfo_all_blocks=1 00:15:09.020 --rc geninfo_unexecuted_blocks=1 00:15:09.020 00:15:09.020 ' 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:09.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.020 --rc genhtml_branch_coverage=1 00:15:09.020 --rc genhtml_function_coverage=1 00:15:09.020 --rc genhtml_legend=1 00:15:09.020 --rc geninfo_all_blocks=1 00:15:09.020 --rc geninfo_unexecuted_blocks=1 00:15:09.020 00:15:09.020 ' 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:09.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.020 --rc genhtml_branch_coverage=1 00:15:09.020 --rc genhtml_function_coverage=1 00:15:09.020 --rc genhtml_legend=1 00:15:09.020 --rc geninfo_all_blocks=1 00:15:09.020 --rc geninfo_unexecuted_blocks=1 00:15:09.020 00:15:09.020 ' 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.020 05:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:09.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:09.020 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:09.021 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.021 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:09.021 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:09.021 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:09.021 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:09.021 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:09.021 05:09:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.144 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:17.144 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:17.144 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:17.144 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:17.144 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:17.144 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:17.144 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:17.144 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:17.144 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:17.145 05:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:17.145 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:17.145 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.145 05:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:17.145 Found net devices under 0000:af:00.0: cvl_0_0 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:17.145 Found net devices under 0000:af:00.1: cvl_0_1 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:17.145 05:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:17.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:15:17.145 00:15:17.145 --- 10.0.0.2 ping statistics --- 00:15:17.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.145 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:17.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:15:17.145 00:15:17.145 --- 10.0.0.1 ping statistics --- 00:15:17.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.145 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:17.145 05:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:17.145 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:17.146 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:17.146 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:17.146 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.146 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=445230 00:15:17.146 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:17.146 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 445230 00:15:17.146 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 445230 ']' 00:15:17.146 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.146 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.146 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.146 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.146 05:09:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.146 [2024-12-09 05:09:58.538774] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:15:17.146 [2024-12-09 05:09:58.538827] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.146 [2024-12-09 05:09:58.637297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:17.146 [2024-12-09 05:09:58.681006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.146 [2024-12-09 05:09:58.681045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.146 [2024-12-09 05:09:58.681060] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.146 [2024-12-09 05:09:58.681070] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.146 [2024-12-09 05:09:58.681080] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:17.146 [2024-12-09 05:09:58.682959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.146 [2024-12-09 05:09:58.683067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.146 [2024-12-09 05:09:58.683173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.146 [2024-12-09 05:09:58.683175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.146 [2024-12-09 05:09:59.428243] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.146 Malloc0 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.146 Malloc1 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.146 [2024-12-09 05:09:59.530140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.146 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:15:17.406 00:15:17.406 Discovery Log Number of Records 2, Generation counter 2 00:15:17.406 =====Discovery Log Entry 0====== 00:15:17.406 trtype: tcp 00:15:17.406 adrfam: ipv4 00:15:17.406 subtype: current discovery subsystem 00:15:17.406 treq: not required 00:15:17.406 portid: 0 00:15:17.406 trsvcid: 4420 
00:15:17.406 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:17.406 traddr: 10.0.0.2 00:15:17.406 eflags: explicit discovery connections, duplicate discovery information 00:15:17.406 sectype: none 00:15:17.406 =====Discovery Log Entry 1====== 00:15:17.406 trtype: tcp 00:15:17.406 adrfam: ipv4 00:15:17.406 subtype: nvme subsystem 00:15:17.406 treq: not required 00:15:17.406 portid: 0 00:15:17.406 trsvcid: 4420 00:15:17.406 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:17.406 traddr: 10.0.0.2 00:15:17.406 eflags: none 00:15:17.406 sectype: none 00:15:17.406 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:17.406 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:17.406 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:17.406 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:17.406 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:17.406 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:17.406 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:17.406 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:17.406 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:17.406 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:17.406 05:09:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:18.786 05:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:18.786 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:15:18.786 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:18.786 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:18.786 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:18.786 05:10:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:20.690 
05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:20.690 /dev/nvme0n2 ]] 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:20.690 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:20.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:20.950 rmmod nvme_tcp 00:15:20.950 rmmod nvme_fabrics 00:15:20.950 rmmod nvme_keyring 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 445230 ']' 
00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 445230 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 445230 ']' 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 445230 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:20.950 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 445230 00:15:21.209 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:21.209 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:21.209 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 445230' 00:15:21.209 killing process with pid 445230 00:15:21.209 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 445230 00:15:21.209 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 445230 00:15:21.468 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:21.468 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:21.468 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:21.468 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:15:21.468 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:15:21.468 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:15:21.469 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:15:21.469 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:21.469 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:21.469 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.469 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:21.469 05:10:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.370 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:23.370 00:15:23.370 real 0m14.782s 00:15:23.370 user 0m21.844s 00:15:23.370 sys 0m6.427s 00:15:23.370 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.370 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:23.370 ************************************ 00:15:23.370 END TEST nvmf_nvme_cli 00:15:23.370 ************************************ 00:15:23.370 05:10:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:23.370 05:10:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:23.370 05:10:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:23.370 05:10:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.370 05:10:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:23.629 ************************************ 00:15:23.629 START TEST 
nvmf_vfio_user 00:15:23.629 ************************************ 00:15:23.629 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:23.629 * Looking for test storage... 00:15:23.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:23.629 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:23.629 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:15:23.629 05:10:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.629 05:10:06 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:23.629 05:10:06 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:23.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.629 --rc genhtml_branch_coverage=1 00:15:23.629 --rc genhtml_function_coverage=1 00:15:23.629 --rc genhtml_legend=1 00:15:23.629 --rc geninfo_all_blocks=1 00:15:23.629 --rc geninfo_unexecuted_blocks=1 00:15:23.629 00:15:23.629 ' 00:15:23.629 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:23.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.629 --rc genhtml_branch_coverage=1 00:15:23.630 --rc genhtml_function_coverage=1 00:15:23.630 --rc genhtml_legend=1 00:15:23.630 --rc geninfo_all_blocks=1 00:15:23.630 --rc geninfo_unexecuted_blocks=1 00:15:23.630 00:15:23.630 ' 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:23.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.630 --rc genhtml_branch_coverage=1 00:15:23.630 --rc genhtml_function_coverage=1 00:15:23.630 --rc genhtml_legend=1 00:15:23.630 --rc geninfo_all_blocks=1 00:15:23.630 --rc geninfo_unexecuted_blocks=1 00:15:23.630 00:15:23.630 ' 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:23.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.630 --rc genhtml_branch_coverage=1 00:15:23.630 --rc genhtml_function_coverage=1 00:15:23.630 --rc genhtml_legend=1 00:15:23.630 --rc geninfo_all_blocks=1 00:15:23.630 --rc geninfo_unexecuted_blocks=1 00:15:23.630 00:15:23.630 ' 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:23.630 
05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:23.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:23.630 05:10:06 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=446698 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 446698' 00:15:23.630 Process pid: 446698 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 446698 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
446698 ']' 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.630 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.889 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.889 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.889 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:23.889 [2024-12-09 05:10:06.146825] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:15:23.889 [2024-12-09 05:10:06.146876] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.889 [2024-12-09 05:10:06.239538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:23.889 [2024-12-09 05:10:06.280090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.889 [2024-12-09 05:10:06.280134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:23.889 [2024-12-09 05:10:06.280149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.889 [2024-12-09 05:10:06.280159] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.889 [2024-12-09 05:10:06.280169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:23.889 [2024-12-09 05:10:06.282005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.889 [2024-12-09 05:10:06.282114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.889 [2024-12-09 05:10:06.282266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.889 [2024-12-09 05:10:06.282268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:24.827 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.827 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:24.827 05:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:25.762 05:10:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:25.762 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:25.762 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:25.762 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:25.762 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:25.762 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:26.021 Malloc1 00:15:26.021 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:26.280 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:26.540 05:10:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:26.799 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:26.799 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:26.799 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:26.799 Malloc2 00:15:26.799 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:27.059 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:27.318 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:27.580 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:27.581 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:27.581 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:15:27.581 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:27.581 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:27.581 05:10:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:27.581 [2024-12-09 05:10:09.843056] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:15:27.581 [2024-12-09 05:10:09.843093] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid447324 ] 00:15:27.581 [2024-12-09 05:10:09.890026] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:27.581 [2024-12-09 05:10:09.898529] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:27.581 [2024-12-09 05:10:09.898551] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa8df968000 00:15:27.581 [2024-12-09 05:10:09.899528] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:27.581 [2024-12-09 05:10:09.900522] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:27.581 [2024-12-09 05:10:09.901529] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:27.581 [2024-12-09 05:10:09.902534] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:27.581 [2024-12-09 05:10:09.903538] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:27.581 [2024-12-09 05:10:09.904542] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:27.581 [2024-12-09 05:10:09.905549] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:27.581 [2024-12-09 05:10:09.906561] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:27.581 [2024-12-09 05:10:09.907564] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:27.581 [2024-12-09 05:10:09.907575] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa8df95d000 00:15:27.581 [2024-12-09 05:10:09.908628] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:27.581 [2024-12-09 05:10:09.922427] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:27.581 [2024-12-09 05:10:09.922457] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:27.581 [2024-12-09 05:10:09.927666] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:27.581 [2024-12-09 05:10:09.927703] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:27.581 [2024-12-09 05:10:09.927778] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:27.581 [2024-12-09 05:10:09.927796] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:27.581 [2024-12-09 05:10:09.927803] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:27.581 [2024-12-09 05:10:09.928663] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:27.581 [2024-12-09 05:10:09.928676] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:27.581 [2024-12-09 05:10:09.928685] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:27.581 [2024-12-09 05:10:09.929672] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:27.581 [2024-12-09 05:10:09.929682] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:27.581 [2024-12-09 05:10:09.929692] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:27.581 [2024-12-09 05:10:09.930678] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:27.581 [2024-12-09 05:10:09.930693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:27.581 [2024-12-09 05:10:09.931683] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:27.581 [2024-12-09 05:10:09.931694] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:27.581 [2024-12-09 05:10:09.931700] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:27.581 [2024-12-09 05:10:09.931709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:27.581 [2024-12-09 05:10:09.931818] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:27.581 [2024-12-09 05:10:09.931824] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:27.581 [2024-12-09 05:10:09.931830] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:27.581 [2024-12-09 05:10:09.932687] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:27.581 [2024-12-09 05:10:09.933694] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:27.581 [2024-12-09 05:10:09.934700] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:27.581 [2024-12-09 05:10:09.935698] 
vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:27.581 [2024-12-09 05:10:09.935791] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:27.581 [2024-12-09 05:10:09.936710] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:27.581 [2024-12-09 05:10:09.936727] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:27.581 [2024-12-09 05:10:09.936734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:27.581 [2024-12-09 05:10:09.936754] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:27.581 [2024-12-09 05:10:09.936768] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:27.581 [2024-12-09 05:10:09.936790] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:27.581 [2024-12-09 05:10:09.936797] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:27.581 [2024-12-09 05:10:09.936802] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:27.581 [2024-12-09 05:10:09.936815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:27.581 [2024-12-09 05:10:09.936866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:15:27.582 [2024-12-09 05:10:09.936877] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:27.582 [2024-12-09 05:10:09.936883] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:27.582 [2024-12-09 05:10:09.936889] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:27.582 [2024-12-09 05:10:09.936895] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:27.582 [2024-12-09 05:10:09.936901] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:27.582 [2024-12-09 05:10:09.936907] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:27.582 [2024-12-09 05:10:09.936913] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:27.582 [2024-12-09 05:10:09.936925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:27.582 [2024-12-09 05:10:09.936937] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:27.582 [2024-12-09 05:10:09.936952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:27.582 [2024-12-09 05:10:09.936964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.582 [2024-12-09 05:10:09.936973] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.582 [2024-12-09 05:10:09.936982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.582 [2024-12-09 05:10:09.936991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:27.582 [2024-12-09 05:10:09.936997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:27.582 [2024-12-09 05:10:09.937007] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:27.582 [2024-12-09 05:10:09.937017] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:27.582 [2024-12-09 05:10:09.937027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:27.582 [2024-12-09 05:10:09.937035] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:27.582 [2024-12-09 05:10:09.937041] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:27.582 [2024-12-09 05:10:09.937052] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:27.582 [2024-12-09 05:10:09.937059] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:15:27.582 [2024-12-09 05:10:09.937069] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:27.582 [2024-12-09 05:10:09.937079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:27.582 [2024-12-09 05:10:09.937129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:27.582 [2024-12-09 05:10:09.937139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:27.582 [2024-12-09 05:10:09.937148] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:27.582 [2024-12-09 05:10:09.937154] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:27.582 [2024-12-09 05:10:09.937158] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:27.582 [2024-12-09 05:10:09.937165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:27.582 [2024-12-09 05:10:09.937178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:27.582 [2024-12-09 05:10:09.937190] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:27.582 [2024-12-09 05:10:09.937201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:27.582 [2024-12-09 05:10:09.937218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:15:27.582 [2024-12-09 05:10:09.937227] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:27.582 [2024-12-09 05:10:09.937233] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:27.582 [2024-12-09 05:10:09.937237] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:27.582 [2024-12-09 05:10:09.937244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:27.582 [2024-12-09 05:10:09.937274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:27.582 [2024-12-09 05:10:09.937286] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:27.582 [2024-12-09 05:10:09.937296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:27.582 [2024-12-09 05:10:09.937304] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:27.582 [2024-12-09 05:10:09.937309] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:27.582 [2024-12-09 05:10:09.937314] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:27.582 [2024-12-09 05:10:09.937320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:27.582 [2024-12-09 05:10:09.937332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:15:27.582 [2024-12-09 05:10:09.937344] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:27.582 [2024-12-09 05:10:09.937353] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:27.582 [2024-12-09 05:10:09.937361] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:27.582 [2024-12-09 05:10:09.937369] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:27.582 [2024-12-09 05:10:09.937375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:27.582 [2024-12-09 05:10:09.937381] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:27.582 [2024-12-09 05:10:09.937388] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:27.582 [2024-12-09 05:10:09.937393] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:27.582 [2024-12-09 05:10:09.937400] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:27.582 [2024-12-09 05:10:09.937417] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:27.582 [2024-12-09 05:10:09.937428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:27.582 [2024-12-09 05:10:09.937443] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:27.582 [2024-12-09 05:10:09.937451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:27.582 [2024-12-09 05:10:09.937464] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:27.582 [2024-12-09 05:10:09.937478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:27.582 [2024-12-09 05:10:09.937490] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:27.582 [2024-12-09 05:10:09.937504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:27.583 [2024-12-09 05:10:09.937520] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:27.583 [2024-12-09 05:10:09.937526] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:27.583 [2024-12-09 05:10:09.937530] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:27.583 [2024-12-09 05:10:09.937535] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:27.583 [2024-12-09 05:10:09.937539] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:27.583 [2024-12-09 05:10:09.937546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:27.583 [2024-12-09 05:10:09.937554] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:27.583 [2024-12-09 05:10:09.937560] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:27.583 [2024-12-09 05:10:09.937564] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:27.583 [2024-12-09 05:10:09.937571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:27.583 [2024-12-09 05:10:09.937579] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:27.583 [2024-12-09 05:10:09.937584] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:27.583 [2024-12-09 05:10:09.937588] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:27.583 [2024-12-09 05:10:09.937595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:27.583 [2024-12-09 05:10:09.937603] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:27.583 [2024-12-09 05:10:09.937609] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:27.583 [2024-12-09 05:10:09.937613] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:27.583 [2024-12-09 05:10:09.937620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:27.583 [2024-12-09 05:10:09.937627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:27.583 [2024-12-09 
05:10:09.937642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:27.583 [2024-12-09 05:10:09.937654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:27.583 [2024-12-09 05:10:09.937663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:27.583 ===================================================== 00:15:27.583 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:27.583 ===================================================== 00:15:27.583 Controller Capabilities/Features 00:15:27.583 ================================ 00:15:27.583 Vendor ID: 4e58 00:15:27.583 Subsystem Vendor ID: 4e58 00:15:27.583 Serial Number: SPDK1 00:15:27.583 Model Number: SPDK bdev Controller 00:15:27.583 Firmware Version: 25.01 00:15:27.583 Recommended Arb Burst: 6 00:15:27.583 IEEE OUI Identifier: 8d 6b 50 00:15:27.583 Multi-path I/O 00:15:27.583 May have multiple subsystem ports: Yes 00:15:27.583 May have multiple controllers: Yes 00:15:27.583 Associated with SR-IOV VF: No 00:15:27.583 Max Data Transfer Size: 131072 00:15:27.583 Max Number of Namespaces: 32 00:15:27.583 Max Number of I/O Queues: 127 00:15:27.583 NVMe Specification Version (VS): 1.3 00:15:27.583 NVMe Specification Version (Identify): 1.3 00:15:27.583 Maximum Queue Entries: 256 00:15:27.583 Contiguous Queues Required: Yes 00:15:27.583 Arbitration Mechanisms Supported 00:15:27.583 Weighted Round Robin: Not Supported 00:15:27.583 Vendor Specific: Not Supported 00:15:27.583 Reset Timeout: 15000 ms 00:15:27.583 Doorbell Stride: 4 bytes 00:15:27.583 NVM Subsystem Reset: Not Supported 00:15:27.583 Command Sets Supported 00:15:27.583 NVM Command Set: Supported 00:15:27.583 Boot Partition: Not Supported 00:15:27.583 Memory Page Size Minimum: 4096 bytes 00:15:27.583 
Memory Page Size Maximum: 4096 bytes 00:15:27.583 Persistent Memory Region: Not Supported 00:15:27.583 Optional Asynchronous Events Supported 00:15:27.583 Namespace Attribute Notices: Supported 00:15:27.583 Firmware Activation Notices: Not Supported 00:15:27.583 ANA Change Notices: Not Supported 00:15:27.583 PLE Aggregate Log Change Notices: Not Supported 00:15:27.583 LBA Status Info Alert Notices: Not Supported 00:15:27.583 EGE Aggregate Log Change Notices: Not Supported 00:15:27.583 Normal NVM Subsystem Shutdown event: Not Supported 00:15:27.583 Zone Descriptor Change Notices: Not Supported 00:15:27.583 Discovery Log Change Notices: Not Supported 00:15:27.583 Controller Attributes 00:15:27.583 128-bit Host Identifier: Supported 00:15:27.583 Non-Operational Permissive Mode: Not Supported 00:15:27.583 NVM Sets: Not Supported 00:15:27.583 Read Recovery Levels: Not Supported 00:15:27.583 Endurance Groups: Not Supported 00:15:27.583 Predictable Latency Mode: Not Supported 00:15:27.583 Traffic Based Keep ALive: Not Supported 00:15:27.583 Namespace Granularity: Not Supported 00:15:27.583 SQ Associations: Not Supported 00:15:27.583 UUID List: Not Supported 00:15:27.583 Multi-Domain Subsystem: Not Supported 00:15:27.583 Fixed Capacity Management: Not Supported 00:15:27.583 Variable Capacity Management: Not Supported 00:15:27.583 Delete Endurance Group: Not Supported 00:15:27.583 Delete NVM Set: Not Supported 00:15:27.583 Extended LBA Formats Supported: Not Supported 00:15:27.583 Flexible Data Placement Supported: Not Supported 00:15:27.583 00:15:27.583 Controller Memory Buffer Support 00:15:27.583 ================================ 00:15:27.583 Supported: No 00:15:27.583 00:15:27.583 Persistent Memory Region Support 00:15:27.583 ================================ 00:15:27.583 Supported: No 00:15:27.583 00:15:27.583 Admin Command Set Attributes 00:15:27.583 ============================ 00:15:27.583 Security Send/Receive: Not Supported 00:15:27.583 Format NVM: Not Supported 
00:15:27.583 Firmware Activate/Download: Not Supported 00:15:27.583 Namespace Management: Not Supported 00:15:27.583 Device Self-Test: Not Supported 00:15:27.583 Directives: Not Supported 00:15:27.583 NVMe-MI: Not Supported 00:15:27.583 Virtualization Management: Not Supported 00:15:27.583 Doorbell Buffer Config: Not Supported 00:15:27.583 Get LBA Status Capability: Not Supported 00:15:27.583 Command & Feature Lockdown Capability: Not Supported 00:15:27.583 Abort Command Limit: 4 00:15:27.583 Async Event Request Limit: 4 00:15:27.583 Number of Firmware Slots: N/A 00:15:27.583 Firmware Slot 1 Read-Only: N/A 00:15:27.583 Firmware Activation Without Reset: N/A 00:15:27.584 Multiple Update Detection Support: N/A 00:15:27.584 Firmware Update Granularity: No Information Provided 00:15:27.584 Per-Namespace SMART Log: No 00:15:27.584 Asymmetric Namespace Access Log Page: Not Supported 00:15:27.584 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:27.584 Command Effects Log Page: Supported 00:15:27.584 Get Log Page Extended Data: Supported 00:15:27.584 Telemetry Log Pages: Not Supported 00:15:27.584 Persistent Event Log Pages: Not Supported 00:15:27.584 Supported Log Pages Log Page: May Support 00:15:27.584 Commands Supported & Effects Log Page: Not Supported 00:15:27.584 Feature Identifiers & Effects Log Page:May Support 00:15:27.584 NVMe-MI Commands & Effects Log Page: May Support 00:15:27.584 Data Area 4 for Telemetry Log: Not Supported 00:15:27.584 Error Log Page Entries Supported: 128 00:15:27.584 Keep Alive: Supported 00:15:27.584 Keep Alive Granularity: 10000 ms 00:15:27.584 00:15:27.584 NVM Command Set Attributes 00:15:27.584 ========================== 00:15:27.584 Submission Queue Entry Size 00:15:27.584 Max: 64 00:15:27.584 Min: 64 00:15:27.584 Completion Queue Entry Size 00:15:27.584 Max: 16 00:15:27.584 Min: 16 00:15:27.584 Number of Namespaces: 32 00:15:27.584 Compare Command: Supported 00:15:27.584 Write Uncorrectable Command: Not Supported 00:15:27.584 Dataset 
Management Command: Supported 00:15:27.584 Write Zeroes Command: Supported 00:15:27.584 Set Features Save Field: Not Supported 00:15:27.584 Reservations: Not Supported 00:15:27.584 Timestamp: Not Supported 00:15:27.584 Copy: Supported 00:15:27.584 Volatile Write Cache: Present 00:15:27.584 Atomic Write Unit (Normal): 1 00:15:27.584 Atomic Write Unit (PFail): 1 00:15:27.584 Atomic Compare & Write Unit: 1 00:15:27.584 Fused Compare & Write: Supported 00:15:27.584 Scatter-Gather List 00:15:27.584 SGL Command Set: Supported (Dword aligned) 00:15:27.584 SGL Keyed: Not Supported 00:15:27.584 SGL Bit Bucket Descriptor: Not Supported 00:15:27.584 SGL Metadata Pointer: Not Supported 00:15:27.584 Oversized SGL: Not Supported 00:15:27.584 SGL Metadata Address: Not Supported 00:15:27.584 SGL Offset: Not Supported 00:15:27.584 Transport SGL Data Block: Not Supported 00:15:27.584 Replay Protected Memory Block: Not Supported 00:15:27.584 00:15:27.584 Firmware Slot Information 00:15:27.584 ========================= 00:15:27.584 Active slot: 1 00:15:27.584 Slot 1 Firmware Revision: 25.01 00:15:27.584 00:15:27.584 00:15:27.584 Commands Supported and Effects 00:15:27.584 ============================== 00:15:27.584 Admin Commands 00:15:27.584 -------------- 00:15:27.584 Get Log Page (02h): Supported 00:15:27.584 Identify (06h): Supported 00:15:27.584 Abort (08h): Supported 00:15:27.584 Set Features (09h): Supported 00:15:27.584 Get Features (0Ah): Supported 00:15:27.584 Asynchronous Event Request (0Ch): Supported 00:15:27.584 Keep Alive (18h): Supported 00:15:27.584 I/O Commands 00:15:27.584 ------------ 00:15:27.584 Flush (00h): Supported LBA-Change 00:15:27.584 Write (01h): Supported LBA-Change 00:15:27.584 Read (02h): Supported 00:15:27.584 Compare (05h): Supported 00:15:27.584 Write Zeroes (08h): Supported LBA-Change 00:15:27.584 Dataset Management (09h): Supported LBA-Change 00:15:27.584 Copy (19h): Supported LBA-Change 00:15:27.584 00:15:27.584 Error Log 00:15:27.584 ========= 
00:15:27.584 00:15:27.584 Arbitration 00:15:27.584 =========== 00:15:27.584 Arbitration Burst: 1 00:15:27.584 00:15:27.584 Power Management 00:15:27.584 ================ 00:15:27.584 Number of Power States: 1 00:15:27.584 Current Power State: Power State #0 00:15:27.584 Power State #0: 00:15:27.584 Max Power: 0.00 W 00:15:27.584 Non-Operational State: Operational 00:15:27.584 Entry Latency: Not Reported 00:15:27.584 Exit Latency: Not Reported 00:15:27.584 Relative Read Throughput: 0 00:15:27.584 Relative Read Latency: 0 00:15:27.584 Relative Write Throughput: 0 00:15:27.584 Relative Write Latency: 0 00:15:27.584 Idle Power: Not Reported 00:15:27.584 Active Power: Not Reported 00:15:27.584 Non-Operational Permissive Mode: Not Supported 00:15:27.584 00:15:27.584 Health Information 00:15:27.584 ================== 00:15:27.584 Critical Warnings: 00:15:27.584 Available Spare Space: OK 00:15:27.584 Temperature: OK 00:15:27.584 Device Reliability: OK 00:15:27.584 Read Only: No 00:15:27.584 Volatile Memory Backup: OK 00:15:27.584 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:27.584 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:27.584 Available Spare: 0% 00:15:27.584 Available Sp[2024-12-09 05:10:09.937753] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:27.584 [2024-12-09 05:10:09.937766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:27.584 [2024-12-09 05:10:09.937798] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:15:27.584 [2024-12-09 05:10:09.937809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.584 [2024-12-09 05:10:09.937817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.584 [2024-12-09 05:10:09.937824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.584 [2024-12-09 05:10:09.937832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:27.584 [2024-12-09 05:10:09.941218] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:27.584 [2024-12-09 05:10:09.941232] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:27.584 [2024-12-09 05:10:09.941740] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:27.584 [2024-12-09 05:10:09.941795] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:15:27.584 [2024-12-09 05:10:09.941806] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:15:27.584 [2024-12-09 05:10:09.942743] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:27.584 [2024-12-09 05:10:09.942758] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:15:27.585 [2024-12-09 05:10:09.942808] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:27.585 [2024-12-09 05:10:09.944782] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:27.845 are Threshold: 0% 00:15:27.845 Life Percentage Used: 0% 00:15:27.845 Data Units Read: 0 00:15:27.845 Data 
Units Written: 0 00:15:27.845 Host Read Commands: 0 00:15:27.845 Host Write Commands: 0 00:15:27.845 Controller Busy Time: 0 minutes 00:15:27.845 Power Cycles: 0 00:15:27.845 Power On Hours: 0 hours 00:15:27.845 Unsafe Shutdowns: 0 00:15:27.845 Unrecoverable Media Errors: 0 00:15:27.845 Lifetime Error Log Entries: 0 00:15:27.845 Warning Temperature Time: 0 minutes 00:15:27.845 Critical Temperature Time: 0 minutes 00:15:27.845 00:15:27.845 Number of Queues 00:15:27.845 ================ 00:15:27.845 Number of I/O Submission Queues: 127 00:15:27.845 Number of I/O Completion Queues: 127 00:15:27.845 00:15:27.845 Active Namespaces 00:15:27.845 ================= 00:15:27.845 Namespace ID:1 00:15:27.845 Error Recovery Timeout: Unlimited 00:15:27.845 Command Set Identifier: NVM (00h) 00:15:27.845 Deallocate: Supported 00:15:27.845 Deallocated/Unwritten Error: Not Supported 00:15:27.845 Deallocated Read Value: Unknown 00:15:27.845 Deallocate in Write Zeroes: Not Supported 00:15:27.845 Deallocated Guard Field: 0xFFFF 00:15:27.845 Flush: Supported 00:15:27.845 Reservation: Supported 00:15:27.845 Namespace Sharing Capabilities: Multiple Controllers 00:15:27.845 Size (in LBAs): 131072 (0GiB) 00:15:27.845 Capacity (in LBAs): 131072 (0GiB) 00:15:27.845 Utilization (in LBAs): 131072 (0GiB) 00:15:27.845 NGUID: E49841F702424679AE41549CF3071135 00:15:27.845 UUID: e49841f7-0242-4679-ae41-549cf3071135 00:15:27.845 Thin Provisioning: Not Supported 00:15:27.845 Per-NS Atomic Units: Yes 00:15:27.845 Atomic Boundary Size (Normal): 0 00:15:27.845 Atomic Boundary Size (PFail): 0 00:15:27.845 Atomic Boundary Offset: 0 00:15:27.845 Maximum Single Source Range Length: 65535 00:15:27.845 Maximum Copy Length: 65535 00:15:27.845 Maximum Source Range Count: 1 00:15:27.845 NGUID/EUI64 Never Reused: No 00:15:27.845 Namespace Write Protected: No 00:15:27.845 Number of LBA Formats: 1 00:15:27.845 Current LBA Format: LBA Format #00 00:15:27.845 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:15:27.845 00:15:27.845 05:10:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:27.845 [2024-12-09 05:10:10.270134] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:33.111 Initializing NVMe Controllers 00:15:33.111 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:33.111 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:33.111 Initialization complete. Launching workers. 00:15:33.111 ======================================================== 00:15:33.111 Latency(us) 00:15:33.111 Device Information : IOPS MiB/s Average min max 00:15:33.111 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39964.39 156.11 3203.43 953.34 9638.99 00:15:33.112 ======================================================== 00:15:33.112 Total : 39964.39 156.11 3203.43 953.34 9638.99 00:15:33.112 00:15:33.112 [2024-12-09 05:10:15.291417] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:33.112 05:10:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:33.372 [2024-12-09 05:10:15.611732] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:38.645 Initializing NVMe Controllers 00:15:38.646 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:15:38.646 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:38.646 Initialization complete. Launching workers. 00:15:38.646 ======================================================== 00:15:38.646 Latency(us) 00:15:38.646 Device Information : IOPS MiB/s Average min max 00:15:38.646 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16038.02 62.65 7986.43 3990.32 11972.14 00:15:38.646 ======================================================== 00:15:38.646 Total : 16038.02 62.65 7986.43 3990.32 11972.14 00:15:38.646 00:15:38.646 [2024-12-09 05:10:20.651253] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:38.646 05:10:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:38.646 [2024-12-09 05:10:20.969562] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:43.918 [2024-12-09 05:10:26.038499] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:43.918 Initializing NVMe Controllers 00:15:43.918 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:43.918 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:43.918 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:43.918 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:43.918 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:43.918 Initialization complete. Launching workers. 
00:15:43.918 Starting thread on core 2 00:15:43.918 Starting thread on core 3 00:15:43.918 Starting thread on core 1 00:15:43.918 05:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:44.178 [2024-12-09 05:10:26.438652] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:47.468 [2024-12-09 05:10:29.504432] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:47.468 Initializing NVMe Controllers 00:15:47.468 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:47.468 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:47.468 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:47.468 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:47.468 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:47.468 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:47.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:47.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:47.468 Initialization complete. Launching workers. 
00:15:47.468 Starting thread on core 1 with urgent priority queue 00:15:47.468 Starting thread on core 2 with urgent priority queue 00:15:47.468 Starting thread on core 3 with urgent priority queue 00:15:47.468 Starting thread on core 0 with urgent priority queue 00:15:47.468 SPDK bdev Controller (SPDK1 ) core 0: 7786.33 IO/s 12.84 secs/100000 ios 00:15:47.468 SPDK bdev Controller (SPDK1 ) core 1: 7267.33 IO/s 13.76 secs/100000 ios 00:15:47.468 SPDK bdev Controller (SPDK1 ) core 2: 8626.33 IO/s 11.59 secs/100000 ios 00:15:47.468 SPDK bdev Controller (SPDK1 ) core 3: 7373.00 IO/s 13.56 secs/100000 ios 00:15:47.468 ======================================================== 00:15:47.468 00:15:47.468 05:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:47.468 [2024-12-09 05:10:29.900667] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:47.468 Initializing NVMe Controllers 00:15:47.468 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:47.468 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:47.468 Namespace ID: 1 size: 0GB 00:15:47.468 Initialization complete. 00:15:47.468 INFO: using host memory buffer for IO 00:15:47.468 Hello world! 
00:15:47.468 [2024-12-09 05:10:29.935041] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:47.727 05:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:47.985 [2024-12-09 05:10:30.333670] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:48.921 Initializing NVMe Controllers 00:15:48.921 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:48.921 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:48.921 Initialization complete. Launching workers. 00:15:48.921 submit (in ns) avg, min, max = 5441.5, 3100.0, 4000024.0 00:15:48.921 complete (in ns) avg, min, max = 21768.7, 1700.0, 7988336.0 00:15:48.921 00:15:48.921 Submit histogram 00:15:48.921 ================ 00:15:48.921 Range in us Cumulative Count 00:15:48.921 3.098 - 3.110: 0.1075% ( 18) 00:15:48.921 3.110 - 3.123: 0.6511% ( 91) 00:15:48.921 3.123 - 3.136: 2.3354% ( 282) 00:15:48.921 3.136 - 3.149: 5.1666% ( 474) 00:15:48.921 3.149 - 3.162: 8.6728% ( 587) 00:15:48.921 3.162 - 3.174: 12.8897% ( 706) 00:15:48.921 3.174 - 3.187: 18.9344% ( 1012) 00:15:48.921 3.187 - 3.200: 24.8955% ( 998) 00:15:48.921 3.200 - 3.213: 30.7610% ( 982) 00:15:48.921 3.213 - 3.226: 37.3850% ( 1109) 00:15:48.921 3.226 - 3.238: 44.6004% ( 1208) 00:15:48.921 3.238 - 3.251: 51.1767% ( 1101) 00:15:48.921 3.251 - 3.264: 55.6982% ( 757) 00:15:48.921 3.264 - 3.277: 58.5474% ( 477) 00:15:48.921 3.277 - 3.302: 64.4845% ( 994) 00:15:48.921 3.302 - 3.328: 68.9941% ( 755) 00:15:48.921 3.328 - 3.354: 73.9159% ( 824) 00:15:48.921 3.354 - 3.379: 84.0999% ( 1705) 00:15:48.921 3.379 - 3.405: 87.7494% ( 611) 00:15:48.921 3.405 - 3.430: 89.0515% ( 218) 00:15:48.921 3.430 - 3.456: 89.7981% ( 
125) 00:15:48.921 3.456 - 3.482: 90.7359% ( 157) 00:15:48.921 3.482 - 3.507: 92.0619% ( 222) 00:15:48.921 3.507 - 3.533: 93.7761% ( 287) 00:15:48.921 3.533 - 3.558: 95.1798% ( 235) 00:15:48.921 3.558 - 3.584: 96.1713% ( 166) 00:15:48.921 3.584 - 3.610: 97.1210% ( 159) 00:15:48.921 3.610 - 3.635: 98.2439% ( 188) 00:15:48.921 3.635 - 3.661: 98.8054% ( 94) 00:15:48.921 3.661 - 3.686: 99.1160% ( 52) 00:15:48.921 3.686 - 3.712: 99.3848% ( 45) 00:15:48.921 3.712 - 3.738: 99.5879% ( 34) 00:15:48.921 3.738 - 3.763: 99.6834% ( 16) 00:15:48.921 3.763 - 3.789: 99.7013% ( 3) 00:15:48.921 4.019 - 4.045: 99.7073% ( 1) 00:15:48.921 4.096 - 4.122: 99.7193% ( 2) 00:15:48.921 5.555 - 5.581: 99.7252% ( 1) 00:15:48.921 5.581 - 5.606: 99.7312% ( 1) 00:15:48.921 5.606 - 5.632: 99.7372% ( 1) 00:15:48.921 5.658 - 5.683: 99.7432% ( 1) 00:15:48.921 5.811 - 5.837: 99.7491% ( 1) 00:15:48.922 5.862 - 5.888: 99.7611% ( 2) 00:15:48.922 5.888 - 5.914: 99.7730% ( 2) 00:15:48.922 5.939 - 5.965: 99.7790% ( 1) 00:15:48.922 6.016 - 6.042: 99.7850% ( 1) 00:15:48.922 6.400 - 6.426: 99.7909% ( 1) 00:15:48.922 6.426 - 6.451: 99.8029% ( 2) 00:15:48.922 6.451 - 6.477: 99.8089% ( 1) 00:15:48.922 6.477 - 6.502: 99.8148% ( 1) 00:15:48.922 6.554 - 6.605: 99.8208% ( 1) 00:15:48.922 6.605 - 6.656: 99.8268% ( 1) 00:15:48.922 6.810 - 6.861: 99.8328% ( 1) 00:15:48.922 6.963 - 7.014: 99.8387% ( 1) 00:15:48.922 7.066 - 7.117: 99.8447% ( 1) 00:15:48.922 7.168 - 7.219: 99.8507% ( 1) 00:15:48.922 7.219 - 7.270: 99.8626% ( 2) 00:15:48.922 7.475 - 7.526: 99.8686% ( 1) 00:15:48.922 7.526 - 7.578: 99.8746% ( 1) 00:15:48.922 7.629 - 7.680: 99.8805% ( 1) 00:15:48.922 7.680 - 7.731: 99.8925% ( 2) 00:15:48.922 7.834 - 7.885: 99.8985% ( 1) 00:15:48.922 7.987 - 8.038: 99.9044% ( 1) 00:15:48.922 8.038 - 8.090: 99.9164% ( 2) 00:15:48.922 8.243 - 8.294: 99.9224% ( 1) 00:15:48.922 8.499 - 8.550: 99.9283% ( 1) 00:15:48.922 8.806 - 8.858: 99.9343% ( 1) 00:15:48.922 9.267 - 9.318: 99.9403% ( 1) 00:15:48.922 10.035 - 10.086: 99.9462% ( 1) 
00:15:48.922 3984.589 - 4010.803: 100.0000% ( 9) 00:15:48.922 00:15:48.922 Complete histogram 00:15:48.922 ================== 00:15:48.922 Range in us Cumulative Count 00:15:48.922 1.690 - 1.702: 0.0060% ( 1) 00:15:48.922 1.702 - 1.715: 3.3090% ( 553) 00:15:48.922 1.715 - 1.728: 28.1089% ( 4152) 00:15:48.922 1.728 - 1.741: 39.5532% ( 1916) 00:15:48.922 1.741 - 1.754: 42.8503% ( 552) 00:15:48.922 1.754 - 1.766: 57.9620% ( 2530) 00:15:48.922 1.766 - 1.779: 84.9122% ( 4512) 00:15:48.922 1.779 - 1.792: 92.0798% ( 1200) 00:15:48.922 1.792 - [2024-12-09 05:10:31.352771] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:49.180 1.805: 95.5023% ( 573) 00:15:49.180 1.805 - 1.818: 96.6611% ( 194) 00:15:49.180 1.818 - 1.830: 97.2226% ( 94) 00:15:49.180 1.830 - 1.843: 98.0409% ( 137) 00:15:49.180 1.843 - 1.856: 98.8233% ( 131) 00:15:49.180 1.856 - 1.869: 99.1578% ( 56) 00:15:49.180 1.869 - 1.882: 99.2474% ( 15) 00:15:49.180 1.882 - 1.894: 99.2892% ( 7) 00:15:49.181 1.894 - 1.907: 99.3012% ( 2) 00:15:49.181 1.907 - 1.920: 99.3131% ( 2) 00:15:49.181 2.035 - 2.048: 99.3191% ( 1) 00:15:49.181 2.048 - 2.061: 99.3251% ( 1) 00:15:49.181 2.266 - 2.278: 99.3310% ( 1) 00:15:49.181 4.122 - 4.147: 99.3370% ( 1) 00:15:49.181 4.250 - 4.275: 99.3430% ( 1) 00:15:49.181 4.275 - 4.301: 99.3489% ( 1) 00:15:49.181 4.352 - 4.378: 99.3549% ( 1) 00:15:49.181 4.480 - 4.506: 99.3609% ( 1) 00:15:49.181 4.582 - 4.608: 99.3669% ( 1) 00:15:49.181 4.890 - 4.915: 99.3728% ( 1) 00:15:49.181 4.915 - 4.941: 99.3788% ( 1) 00:15:49.181 4.966 - 4.992: 99.3848% ( 1) 00:15:49.181 5.043 - 5.069: 99.3908% ( 1) 00:15:49.181 5.222 - 5.248: 99.3967% ( 1) 00:15:49.181 5.299 - 5.325: 99.4087% ( 2) 00:15:49.181 5.350 - 5.376: 99.4146% ( 1) 00:15:49.181 5.530 - 5.555: 99.4206% ( 1) 00:15:49.181 5.555 - 5.581: 99.4266% ( 1) 00:15:49.181 5.862 - 5.888: 99.4326% ( 1) 00:15:49.181 5.965 - 5.990: 99.4385% ( 1) 00:15:49.181 6.093 - 6.118: 99.4445% ( 1) 00:15:49.181 6.170 - 
6.195: 99.4505% ( 1) 00:15:49.181 6.451 - 6.477: 99.4624% ( 2) 00:15:49.181 6.528 - 6.554: 99.4684% ( 1) 00:15:49.181 6.554 - 6.605: 99.4744% ( 1) 00:15:49.181 6.758 - 6.810: 99.4803% ( 1) 00:15:49.181 7.168 - 7.219: 99.4863% ( 1) 00:15:49.181 7.475 - 7.526: 99.4923% ( 1) 00:15:49.181 8.038 - 8.090: 99.4983% ( 1) 00:15:49.181 10.291 - 10.342: 99.5042% ( 1) 00:15:49.181 10.803 - 10.854: 99.5102% ( 1) 00:15:49.181 47.309 - 47.514: 99.5162% ( 1) 00:15:49.181 170.394 - 171.213: 99.5222% ( 1) 00:15:49.181 3984.589 - 4010.803: 99.9701% ( 75) 00:15:49.181 4141.875 - 4168.090: 99.9761% ( 1) 00:15:49.181 6973.030 - 7025.459: 99.9821% ( 1) 00:15:49.181 7969.178 - 8021.606: 100.0000% ( 3) 00:15:49.181 00:15:49.181 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:49.181 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:49.181 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:49.181 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:49.181 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:49.181 [ 00:15:49.181 { 00:15:49.181 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:49.181 "subtype": "Discovery", 00:15:49.181 "listen_addresses": [], 00:15:49.181 "allow_any_host": true, 00:15:49.181 "hosts": [] 00:15:49.181 }, 00:15:49.181 { 00:15:49.181 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:49.181 "subtype": "NVMe", 00:15:49.181 "listen_addresses": [ 00:15:49.181 { 00:15:49.181 "trtype": "VFIOUSER", 00:15:49.181 "adrfam": "IPv4", 00:15:49.181 "traddr": 
"/var/run/vfio-user/domain/vfio-user1/1", 00:15:49.181 "trsvcid": "0" 00:15:49.181 } 00:15:49.181 ], 00:15:49.181 "allow_any_host": true, 00:15:49.181 "hosts": [], 00:15:49.181 "serial_number": "SPDK1", 00:15:49.181 "model_number": "SPDK bdev Controller", 00:15:49.181 "max_namespaces": 32, 00:15:49.181 "min_cntlid": 1, 00:15:49.181 "max_cntlid": 65519, 00:15:49.181 "namespaces": [ 00:15:49.181 { 00:15:49.181 "nsid": 1, 00:15:49.181 "bdev_name": "Malloc1", 00:15:49.181 "name": "Malloc1", 00:15:49.181 "nguid": "E49841F702424679AE41549CF3071135", 00:15:49.181 "uuid": "e49841f7-0242-4679-ae41-549cf3071135" 00:15:49.181 } 00:15:49.181 ] 00:15:49.181 }, 00:15:49.181 { 00:15:49.181 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:49.181 "subtype": "NVMe", 00:15:49.181 "listen_addresses": [ 00:15:49.181 { 00:15:49.181 "trtype": "VFIOUSER", 00:15:49.181 "adrfam": "IPv4", 00:15:49.181 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:49.181 "trsvcid": "0" 00:15:49.181 } 00:15:49.181 ], 00:15:49.181 "allow_any_host": true, 00:15:49.181 "hosts": [], 00:15:49.181 "serial_number": "SPDK2", 00:15:49.181 "model_number": "SPDK bdev Controller", 00:15:49.181 "max_namespaces": 32, 00:15:49.181 "min_cntlid": 1, 00:15:49.181 "max_cntlid": 65519, 00:15:49.181 "namespaces": [ 00:15:49.181 { 00:15:49.181 "nsid": 1, 00:15:49.181 "bdev_name": "Malloc2", 00:15:49.181 "name": "Malloc2", 00:15:49.181 "nguid": "16714CF943DC40CB80C661FE37B90FFE", 00:15:49.181 "uuid": "16714cf9-43dc-40cb-80c6-61fe37b90ffe" 00:15:49.181 } 00:15:49.181 ] 00:15:49.181 } 00:15:49.181 ] 00:15:49.181 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:49.181 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=451000 00:15:49.181 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER 
traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:49.181 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:49.181 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:49.181 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:49.181 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:15:49.181 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:15:49.181 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:15:49.439 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:49.439 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:15:49.439 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:15:49.439 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:15:49.439 [2024-12-09 05:10:31.783625] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:49.439 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:49.439 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:49.439 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:49.439 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:49.439 05:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:49.697 Malloc3 00:15:49.697 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:49.956 [2024-12-09 05:10:32.201791] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:49.956 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:49.956 Asynchronous Event Request test 00:15:49.956 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:49.956 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:49.956 Registering asynchronous event callbacks... 00:15:49.956 Starting namespace attribute notice tests for all controllers... 00:15:49.956 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:49.956 aer_cb - Changed Namespace 00:15:49.956 Cleaning up... 
00:15:49.956 [ 00:15:49.956 { 00:15:49.956 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:49.956 "subtype": "Discovery", 00:15:49.956 "listen_addresses": [], 00:15:49.956 "allow_any_host": true, 00:15:49.956 "hosts": [] 00:15:49.956 }, 00:15:49.956 { 00:15:49.956 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:49.956 "subtype": "NVMe", 00:15:49.956 "listen_addresses": [ 00:15:49.956 { 00:15:49.956 "trtype": "VFIOUSER", 00:15:49.956 "adrfam": "IPv4", 00:15:49.956 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:49.956 "trsvcid": "0" 00:15:49.956 } 00:15:49.956 ], 00:15:49.956 "allow_any_host": true, 00:15:49.956 "hosts": [], 00:15:49.956 "serial_number": "SPDK1", 00:15:49.956 "model_number": "SPDK bdev Controller", 00:15:49.956 "max_namespaces": 32, 00:15:49.956 "min_cntlid": 1, 00:15:49.956 "max_cntlid": 65519, 00:15:49.956 "namespaces": [ 00:15:49.956 { 00:15:49.956 "nsid": 1, 00:15:49.956 "bdev_name": "Malloc1", 00:15:49.956 "name": "Malloc1", 00:15:49.956 "nguid": "E49841F702424679AE41549CF3071135", 00:15:49.956 "uuid": "e49841f7-0242-4679-ae41-549cf3071135" 00:15:49.956 }, 00:15:49.956 { 00:15:49.956 "nsid": 2, 00:15:49.956 "bdev_name": "Malloc3", 00:15:49.956 "name": "Malloc3", 00:15:49.956 "nguid": "4F05F21F878C4421B9FD884BE833B445", 00:15:49.956 "uuid": "4f05f21f-878c-4421-b9fd-884be833b445" 00:15:49.956 } 00:15:49.956 ] 00:15:49.956 }, 00:15:49.956 { 00:15:49.956 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:49.956 "subtype": "NVMe", 00:15:49.956 "listen_addresses": [ 00:15:49.956 { 00:15:49.956 "trtype": "VFIOUSER", 00:15:49.956 "adrfam": "IPv4", 00:15:49.956 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:49.956 "trsvcid": "0" 00:15:49.956 } 00:15:49.956 ], 00:15:49.956 "allow_any_host": true, 00:15:49.956 "hosts": [], 00:15:49.956 "serial_number": "SPDK2", 00:15:49.956 "model_number": "SPDK bdev Controller", 00:15:49.956 "max_namespaces": 32, 00:15:49.956 "min_cntlid": 1, 00:15:49.956 "max_cntlid": 65519, 00:15:49.956 "namespaces": [ 
00:15:49.956 { 00:15:49.956 "nsid": 1, 00:15:49.956 "bdev_name": "Malloc2", 00:15:49.956 "name": "Malloc2", 00:15:49.956 "nguid": "16714CF943DC40CB80C661FE37B90FFE", 00:15:49.956 "uuid": "16714cf9-43dc-40cb-80c6-61fe37b90ffe" 00:15:49.956 } 00:15:49.956 ] 00:15:49.956 } 00:15:49.956 ] 00:15:49.956 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 451000 00:15:49.956 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:49.956 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:49.956 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:49.956 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:50.217 [2024-12-09 05:10:32.434517] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:15:50.217 [2024-12-09 05:10:32.434545] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid451247 ] 00:15:50.217 [2024-12-09 05:10:32.480392] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:50.217 [2024-12-09 05:10:32.487439] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:50.217 [2024-12-09 05:10:32.487464] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc4a0df3000 00:15:50.217 [2024-12-09 05:10:32.488439] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:50.217 [2024-12-09 05:10:32.489450] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:50.217 [2024-12-09 05:10:32.490459] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:50.217 [2024-12-09 05:10:32.491460] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:50.217 [2024-12-09 05:10:32.492467] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:50.217 [2024-12-09 05:10:32.493476] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:50.217 [2024-12-09 05:10:32.494480] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:50.217 
[2024-12-09 05:10:32.495489] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:50.217 [2024-12-09 05:10:32.496500] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:50.217 [2024-12-09 05:10:32.496512] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc4a0de8000 00:15:50.217 [2024-12-09 05:10:32.497565] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:50.217 [2024-12-09 05:10:32.510423] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:50.217 [2024-12-09 05:10:32.510451] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:50.217 [2024-12-09 05:10:32.515536] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:50.217 [2024-12-09 05:10:32.515583] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:50.217 [2024-12-09 05:10:32.515656] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:50.217 [2024-12-09 05:10:32.515673] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:50.217 [2024-12-09 05:10:32.515680] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:50.217 [2024-12-09 05:10:32.516540] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:50.217 [2024-12-09 05:10:32.516554] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:50.217 [2024-12-09 05:10:32.516564] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:50.217 [2024-12-09 05:10:32.517548] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:50.217 [2024-12-09 05:10:32.517559] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:50.217 [2024-12-09 05:10:32.517569] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:50.217 [2024-12-09 05:10:32.518551] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:50.217 [2024-12-09 05:10:32.518563] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:50.217 [2024-12-09 05:10:32.519554] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:50.217 [2024-12-09 05:10:32.519565] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:50.217 [2024-12-09 05:10:32.519571] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:50.217 [2024-12-09 05:10:32.519580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:50.217 [2024-12-09 05:10:32.519687] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:50.217 [2024-12-09 05:10:32.519693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:50.217 [2024-12-09 05:10:32.519700] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:50.217 [2024-12-09 05:10:32.520560] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:50.217 [2024-12-09 05:10:32.521563] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:50.217 [2024-12-09 05:10:32.522567] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:50.217 [2024-12-09 05:10:32.523564] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:50.217 [2024-12-09 05:10:32.523613] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:50.217 [2024-12-09 05:10:32.524584] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:50.217 [2024-12-09 05:10:32.524600] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:50.217 [2024-12-09 05:10:32.524607] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:50.217 [2024-12-09 05:10:32.524627] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:50.217 [2024-12-09 05:10:32.524637] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:50.217 [2024-12-09 05:10:32.524654] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:50.217 [2024-12-09 05:10:32.524660] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:50.217 [2024-12-09 05:10:32.524665] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:50.218 [2024-12-09 05:10:32.524679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:50.218 [2024-12-09 05:10:32.532216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:50.218 [2024-12-09 05:10:32.532231] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:50.218 [2024-12-09 05:10:32.532238] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:50.218 [2024-12-09 05:10:32.532244] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:50.218 [2024-12-09 05:10:32.532250] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:50.218 [2024-12-09 05:10:32.532256] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:50.218 [2024-12-09 05:10:32.532262] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:50.218 [2024-12-09 05:10:32.532268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:50.218 [2024-12-09 05:10:32.532278] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:50.218 [2024-12-09 05:10:32.532290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:50.218 [2024-12-09 05:10:32.540214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:50.218 [2024-12-09 05:10:32.540229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.218 [2024-12-09 05:10:32.540238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.218 [2024-12-09 05:10:32.540247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.218 [2024-12-09 05:10:32.540260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.218 [2024-12-09 05:10:32.540266] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:50.218 [2024-12-09 05:10:32.540277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:50.218 [2024-12-09 05:10:32.540288] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:50.218 [2024-12-09 05:10:32.548215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:50.218 [2024-12-09 05:10:32.548227] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:50.218 [2024-12-09 05:10:32.548234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:50.218 [2024-12-09 05:10:32.548248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:50.218 [2024-12-09 05:10:32.548255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:50.218 [2024-12-09 05:10:32.548265] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:50.218 [2024-12-09 05:10:32.556215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:50.218 [2024-12-09 05:10:32.556275] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:50.218 [2024-12-09 05:10:32.556285] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:50.218 
[2024-12-09 05:10:32.556294] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:50.218 [2024-12-09 05:10:32.556300] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:50.218 [2024-12-09 05:10:32.556305] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:50.218 [2024-12-09 05:10:32.556312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:50.218 [2024-12-09 05:10:32.564215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:50.218 [2024-12-09 05:10:32.564233] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:50.218 [2024-12-09 05:10:32.564247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:50.218 [2024-12-09 05:10:32.564256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:50.218 [2024-12-09 05:10:32.564264] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:50.218 [2024-12-09 05:10:32.564270] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:50.218 [2024-12-09 05:10:32.564275] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:50.218 [2024-12-09 05:10:32.564282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:50.218 [2024-12-09 05:10:32.572214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:50.218 [2024-12-09 05:10:32.572230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:50.218 [2024-12-09 05:10:32.572241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:50.218 [2024-12-09 05:10:32.572250] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:50.218 [2024-12-09 05:10:32.572256] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:50.218 [2024-12-09 05:10:32.572260] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:50.218 [2024-12-09 05:10:32.572267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:50.218 [2024-12-09 05:10:32.580215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:50.218 [2024-12-09 05:10:32.580229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:50.218 [2024-12-09 05:10:32.580237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:50.218 [2024-12-09 05:10:32.580246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:50.218 [2024-12-09 05:10:32.580253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:15:50.218 [2024-12-09 05:10:32.580260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:50.218 [2024-12-09 05:10:32.580266] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:50.218 [2024-12-09 05:10:32.580273] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:50.218 [2024-12-09 05:10:32.580279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:50.218 [2024-12-09 05:10:32.580285] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:50.218 [2024-12-09 05:10:32.580302] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:50.218 [2024-12-09 05:10:32.588216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:50.218 [2024-12-09 05:10:32.588232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:50.218 [2024-12-09 05:10:32.596213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:50.218 [2024-12-09 05:10:32.596228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:50.218 [2024-12-09 05:10:32.604214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:50.218 [2024-12-09 
05:10:32.604229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:50.218 [2024-12-09 05:10:32.612213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:50.218 [2024-12-09 05:10:32.612232] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:50.218 [2024-12-09 05:10:32.612241] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:50.218 [2024-12-09 05:10:32.612246] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:50.218 [2024-12-09 05:10:32.612250] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:50.218 [2024-12-09 05:10:32.612255] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:50.218 [2024-12-09 05:10:32.612262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:50.218 [2024-12-09 05:10:32.612270] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:50.218 [2024-12-09 05:10:32.612276] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:50.218 [2024-12-09 05:10:32.612280] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:50.218 [2024-12-09 05:10:32.612287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:50.218 [2024-12-09 05:10:32.612295] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:50.218 [2024-12-09 05:10:32.612300] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:50.218 [2024-12-09 05:10:32.612305] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:50.218 [2024-12-09 05:10:32.612311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:50.218 [2024-12-09 05:10:32.612320] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:50.218 [2024-12-09 05:10:32.612325] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:50.218 [2024-12-09 05:10:32.612330] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:50.219 [2024-12-09 05:10:32.612336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:50.219 [2024-12-09 05:10:32.620213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:50.219 [2024-12-09 05:10:32.620229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:50.219 [2024-12-09 05:10:32.620242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:50.219 [2024-12-09 05:10:32.620251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:50.219 ===================================================== 00:15:50.219 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:50.219 ===================================================== 00:15:50.219 Controller Capabilities/Features 00:15:50.219 
================================ 00:15:50.219 Vendor ID: 4e58 00:15:50.219 Subsystem Vendor ID: 4e58 00:15:50.219 Serial Number: SPDK2 00:15:50.219 Model Number: SPDK bdev Controller 00:15:50.219 Firmware Version: 25.01 00:15:50.219 Recommended Arb Burst: 6 00:15:50.219 IEEE OUI Identifier: 8d 6b 50 00:15:50.219 Multi-path I/O 00:15:50.219 May have multiple subsystem ports: Yes 00:15:50.219 May have multiple controllers: Yes 00:15:50.219 Associated with SR-IOV VF: No 00:15:50.219 Max Data Transfer Size: 131072 00:15:50.219 Max Number of Namespaces: 32 00:15:50.219 Max Number of I/O Queues: 127 00:15:50.219 NVMe Specification Version (VS): 1.3 00:15:50.219 NVMe Specification Version (Identify): 1.3 00:15:50.219 Maximum Queue Entries: 256 00:15:50.219 Contiguous Queues Required: Yes 00:15:50.219 Arbitration Mechanisms Supported 00:15:50.219 Weighted Round Robin: Not Supported 00:15:50.219 Vendor Specific: Not Supported 00:15:50.219 Reset Timeout: 15000 ms 00:15:50.219 Doorbell Stride: 4 bytes 00:15:50.219 NVM Subsystem Reset: Not Supported 00:15:50.219 Command Sets Supported 00:15:50.219 NVM Command Set: Supported 00:15:50.219 Boot Partition: Not Supported 00:15:50.219 Memory Page Size Minimum: 4096 bytes 00:15:50.219 Memory Page Size Maximum: 4096 bytes 00:15:50.219 Persistent Memory Region: Not Supported 00:15:50.219 Optional Asynchronous Events Supported 00:15:50.219 Namespace Attribute Notices: Supported 00:15:50.219 Firmware Activation Notices: Not Supported 00:15:50.219 ANA Change Notices: Not Supported 00:15:50.219 PLE Aggregate Log Change Notices: Not Supported 00:15:50.219 LBA Status Info Alert Notices: Not Supported 00:15:50.219 EGE Aggregate Log Change Notices: Not Supported 00:15:50.219 Normal NVM Subsystem Shutdown event: Not Supported 00:15:50.219 Zone Descriptor Change Notices: Not Supported 00:15:50.219 Discovery Log Change Notices: Not Supported 00:15:50.219 Controller Attributes 00:15:50.219 128-bit Host Identifier: Supported 00:15:50.219 
Non-Operational Permissive Mode: Not Supported 00:15:50.219 NVM Sets: Not Supported 00:15:50.219 Read Recovery Levels: Not Supported 00:15:50.219 Endurance Groups: Not Supported 00:15:50.219 Predictable Latency Mode: Not Supported 00:15:50.219 Traffic Based Keep ALive: Not Supported 00:15:50.219 Namespace Granularity: Not Supported 00:15:50.219 SQ Associations: Not Supported 00:15:50.219 UUID List: Not Supported 00:15:50.219 Multi-Domain Subsystem: Not Supported 00:15:50.219 Fixed Capacity Management: Not Supported 00:15:50.219 Variable Capacity Management: Not Supported 00:15:50.219 Delete Endurance Group: Not Supported 00:15:50.219 Delete NVM Set: Not Supported 00:15:50.219 Extended LBA Formats Supported: Not Supported 00:15:50.219 Flexible Data Placement Supported: Not Supported 00:15:50.219 00:15:50.219 Controller Memory Buffer Support 00:15:50.219 ================================ 00:15:50.219 Supported: No 00:15:50.219 00:15:50.219 Persistent Memory Region Support 00:15:50.219 ================================ 00:15:50.219 Supported: No 00:15:50.219 00:15:50.219 Admin Command Set Attributes 00:15:50.219 ============================ 00:15:50.219 Security Send/Receive: Not Supported 00:15:50.219 Format NVM: Not Supported 00:15:50.219 Firmware Activate/Download: Not Supported 00:15:50.219 Namespace Management: Not Supported 00:15:50.219 Device Self-Test: Not Supported 00:15:50.219 Directives: Not Supported 00:15:50.219 NVMe-MI: Not Supported 00:15:50.219 Virtualization Management: Not Supported 00:15:50.219 Doorbell Buffer Config: Not Supported 00:15:50.219 Get LBA Status Capability: Not Supported 00:15:50.219 Command & Feature Lockdown Capability: Not Supported 00:15:50.219 Abort Command Limit: 4 00:15:50.219 Async Event Request Limit: 4 00:15:50.219 Number of Firmware Slots: N/A 00:15:50.219 Firmware Slot 1 Read-Only: N/A 00:15:50.219 Firmware Activation Without Reset: N/A 00:15:50.219 Multiple Update Detection Support: N/A 00:15:50.219 Firmware Update 
Granularity: No Information Provided 00:15:50.219 Per-Namespace SMART Log: No 00:15:50.219 Asymmetric Namespace Access Log Page: Not Supported 00:15:50.219 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:50.219 Command Effects Log Page: Supported 00:15:50.219 Get Log Page Extended Data: Supported 00:15:50.219 Telemetry Log Pages: Not Supported 00:15:50.219 Persistent Event Log Pages: Not Supported 00:15:50.219 Supported Log Pages Log Page: May Support 00:15:50.219 Commands Supported & Effects Log Page: Not Supported 00:15:50.219 Feature Identifiers & Effects Log Page:May Support 00:15:50.219 NVMe-MI Commands & Effects Log Page: May Support 00:15:50.219 Data Area 4 for Telemetry Log: Not Supported 00:15:50.219 Error Log Page Entries Supported: 128 00:15:50.219 Keep Alive: Supported 00:15:50.219 Keep Alive Granularity: 10000 ms 00:15:50.219 00:15:50.219 NVM Command Set Attributes 00:15:50.219 ========================== 00:15:50.219 Submission Queue Entry Size 00:15:50.219 Max: 64 00:15:50.219 Min: 64 00:15:50.219 Completion Queue Entry Size 00:15:50.219 Max: 16 00:15:50.219 Min: 16 00:15:50.219 Number of Namespaces: 32 00:15:50.219 Compare Command: Supported 00:15:50.219 Write Uncorrectable Command: Not Supported 00:15:50.219 Dataset Management Command: Supported 00:15:50.219 Write Zeroes Command: Supported 00:15:50.219 Set Features Save Field: Not Supported 00:15:50.219 Reservations: Not Supported 00:15:50.219 Timestamp: Not Supported 00:15:50.219 Copy: Supported 00:15:50.219 Volatile Write Cache: Present 00:15:50.219 Atomic Write Unit (Normal): 1 00:15:50.219 Atomic Write Unit (PFail): 1 00:15:50.219 Atomic Compare & Write Unit: 1 00:15:50.219 Fused Compare & Write: Supported 00:15:50.219 Scatter-Gather List 00:15:50.219 SGL Command Set: Supported (Dword aligned) 00:15:50.219 SGL Keyed: Not Supported 00:15:50.219 SGL Bit Bucket Descriptor: Not Supported 00:15:50.219 SGL Metadata Pointer: Not Supported 00:15:50.219 Oversized SGL: Not Supported 00:15:50.219 SGL 
Metadata Address: Not Supported 00:15:50.219 SGL Offset: Not Supported 00:15:50.219 Transport SGL Data Block: Not Supported 00:15:50.219 Replay Protected Memory Block: Not Supported 00:15:50.219 00:15:50.219 Firmware Slot Information 00:15:50.219 ========================= 00:15:50.219 Active slot: 1 00:15:50.219 Slot 1 Firmware Revision: 25.01 00:15:50.219 00:15:50.219 00:15:50.219 Commands Supported and Effects 00:15:50.219 ============================== 00:15:50.219 Admin Commands 00:15:50.219 -------------- 00:15:50.219 Get Log Page (02h): Supported 00:15:50.219 Identify (06h): Supported 00:15:50.219 Abort (08h): Supported 00:15:50.219 Set Features (09h): Supported 00:15:50.219 Get Features (0Ah): Supported 00:15:50.219 Asynchronous Event Request (0Ch): Supported 00:15:50.219 Keep Alive (18h): Supported 00:15:50.219 I/O Commands 00:15:50.219 ------------ 00:15:50.219 Flush (00h): Supported LBA-Change 00:15:50.219 Write (01h): Supported LBA-Change 00:15:50.219 Read (02h): Supported 00:15:50.219 Compare (05h): Supported 00:15:50.219 Write Zeroes (08h): Supported LBA-Change 00:15:50.219 Dataset Management (09h): Supported LBA-Change 00:15:50.219 Copy (19h): Supported LBA-Change 00:15:50.219 00:15:50.219 Error Log 00:15:50.219 ========= 00:15:50.219 00:15:50.219 Arbitration 00:15:50.219 =========== 00:15:50.219 Arbitration Burst: 1 00:15:50.219 00:15:50.219 Power Management 00:15:50.219 ================ 00:15:50.219 Number of Power States: 1 00:15:50.219 Current Power State: Power State #0 00:15:50.219 Power State #0: 00:15:50.219 Max Power: 0.00 W 00:15:50.220 Non-Operational State: Operational 00:15:50.220 Entry Latency: Not Reported 00:15:50.220 Exit Latency: Not Reported 00:15:50.220 Relative Read Throughput: 0 00:15:50.220 Relative Read Latency: 0 00:15:50.220 Relative Write Throughput: 0 00:15:50.220 Relative Write Latency: 0 00:15:50.220 Idle Power: Not Reported 00:15:50.220 Active Power: Not Reported 00:15:50.220 Non-Operational Permissive Mode: Not 
Supported 00:15:50.220 00:15:50.220 Health Information 00:15:50.220 ================== 00:15:50.220 Critical Warnings: 00:15:50.220 Available Spare Space: OK 00:15:50.220 Temperature: OK 00:15:50.220 Device Reliability: OK 00:15:50.220 Read Only: No 00:15:50.220 Volatile Memory Backup: OK 00:15:50.220 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:50.220 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:50.220 Available Spare: 0% 00:15:50.220 Available Sp[2024-12-09 05:10:32.620346] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:50.220 [2024-12-09 05:10:32.628214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:50.220 [2024-12-09 05:10:32.628248] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:50.220 [2024-12-09 05:10:32.628258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.220 [2024-12-09 05:10:32.628266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.220 [2024-12-09 05:10:32.628274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.220 [2024-12-09 05:10:32.628281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.220 [2024-12-09 05:10:32.628338] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:50.220 [2024-12-09 05:10:32.628351] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:50.220 
[2024-12-09 05:10:32.629344] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:50.220 [2024-12-09 05:10:32.629396] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:50.220 [2024-12-09 05:10:32.629407] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:50.220 [2024-12-09 05:10:32.630350] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:50.220 [2024-12-09 05:10:32.630365] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:50.220 [2024-12-09 05:10:32.630413] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:50.220 [2024-12-09 05:10:32.631504] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:50.477 are Threshold: 0% 00:15:50.477 Life Percentage Used: 0% 00:15:50.477 Data Units Read: 0 00:15:50.477 Data Units Written: 0 00:15:50.477 Host Read Commands: 0 00:15:50.477 Host Write Commands: 0 00:15:50.477 Controller Busy Time: 0 minutes 00:15:50.477 Power Cycles: 0 00:15:50.477 Power On Hours: 0 hours 00:15:50.477 Unsafe Shutdowns: 0 00:15:50.477 Unrecoverable Media Errors: 0 00:15:50.477 Lifetime Error Log Entries: 0 00:15:50.477 Warning Temperature Time: 0 minutes 00:15:50.477 Critical Temperature Time: 0 minutes 00:15:50.477 00:15:50.478 Number of Queues 00:15:50.478 ================ 00:15:50.478 Number of I/O Submission Queues: 127 00:15:50.478 Number of I/O Completion Queues: 127 00:15:50.478 00:15:50.478 Active Namespaces 00:15:50.478 ================= 00:15:50.478 Namespace ID:1 00:15:50.478 Error Recovery Timeout: Unlimited 
00:15:50.478 Command Set Identifier: NVM (00h) 00:15:50.478 Deallocate: Supported 00:15:50.478 Deallocated/Unwritten Error: Not Supported 00:15:50.478 Deallocated Read Value: Unknown 00:15:50.478 Deallocate in Write Zeroes: Not Supported 00:15:50.478 Deallocated Guard Field: 0xFFFF 00:15:50.478 Flush: Supported 00:15:50.478 Reservation: Supported 00:15:50.478 Namespace Sharing Capabilities: Multiple Controllers 00:15:50.478 Size (in LBAs): 131072 (0GiB) 00:15:50.478 Capacity (in LBAs): 131072 (0GiB) 00:15:50.478 Utilization (in LBAs): 131072 (0GiB) 00:15:50.478 NGUID: 16714CF943DC40CB80C661FE37B90FFE 00:15:50.478 UUID: 16714cf9-43dc-40cb-80c6-61fe37b90ffe 00:15:50.478 Thin Provisioning: Not Supported 00:15:50.478 Per-NS Atomic Units: Yes 00:15:50.478 Atomic Boundary Size (Normal): 0 00:15:50.478 Atomic Boundary Size (PFail): 0 00:15:50.478 Atomic Boundary Offset: 0 00:15:50.478 Maximum Single Source Range Length: 65535 00:15:50.478 Maximum Copy Length: 65535 00:15:50.478 Maximum Source Range Count: 1 00:15:50.478 NGUID/EUI64 Never Reused: No 00:15:50.478 Namespace Write Protected: No 00:15:50.478 Number of LBA Formats: 1 00:15:50.478 Current LBA Format: LBA Format #00 00:15:50.478 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:50.478 00:15:50.478 05:10:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:50.478 [2024-12-09 05:10:32.931319] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:55.743 Initializing NVMe Controllers 00:15:55.743 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:55.743 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:15:55.743 Initialization complete. Launching workers. 00:15:55.743 ======================================================== 00:15:55.743 Latency(us) 00:15:55.743 Device Information : IOPS MiB/s Average min max 00:15:55.743 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39872.59 155.75 3210.06 955.82 10638.65 00:15:55.743 ======================================================== 00:15:55.743 Total : 39872.59 155.75 3210.06 955.82 10638.65 00:15:55.743 00:15:55.743 [2024-12-09 05:10:38.032490] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:55.743 05:10:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:56.001 [2024-12-09 05:10:38.352354] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:01.269 Initializing NVMe Controllers 00:16:01.269 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:01.269 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:01.269 Initialization complete. Launching workers. 
00:16:01.269 ======================================================== 00:16:01.269 Latency(us) 00:16:01.269 Device Information : IOPS MiB/s Average min max 00:16:01.269 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39967.38 156.12 3202.89 950.05 8624.98 00:16:01.269 ======================================================== 00:16:01.269 Total : 39967.38 156.12 3202.89 950.05 8624.98 00:16:01.269 00:16:01.269 [2024-12-09 05:10:43.374112] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:01.269 05:10:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:01.269 [2024-12-09 05:10:43.691536] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:06.537 [2024-12-09 05:10:48.839311] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:06.537 Initializing NVMe Controllers 00:16:06.537 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:06.537 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:06.537 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:06.537 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:06.537 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:06.537 Initialization complete. Launching workers. 
00:16:06.537 Starting thread on core 2 00:16:06.537 Starting thread on core 3 00:16:06.537 Starting thread on core 1 00:16:06.537 05:10:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:06.795 [2024-12-09 05:10:49.244616] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:10.088 [2024-12-09 05:10:52.331438] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:10.088 Initializing NVMe Controllers 00:16:10.088 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:10.088 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:10.088 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:10.088 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:10.088 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:10.088 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:10.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:10.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:10.089 Initialization complete. Launching workers. 
00:16:10.089 Starting thread on core 1 with urgent priority queue 00:16:10.089 Starting thread on core 2 with urgent priority queue 00:16:10.089 Starting thread on core 3 with urgent priority queue 00:16:10.089 Starting thread on core 0 with urgent priority queue 00:16:10.089 SPDK bdev Controller (SPDK2 ) core 0: 9296.00 IO/s 10.76 secs/100000 ios 00:16:10.089 SPDK bdev Controller (SPDK2 ) core 1: 7370.33 IO/s 13.57 secs/100000 ios 00:16:10.089 SPDK bdev Controller (SPDK2 ) core 2: 7387.67 IO/s 13.54 secs/100000 ios 00:16:10.089 SPDK bdev Controller (SPDK2 ) core 3: 9158.33 IO/s 10.92 secs/100000 ios 00:16:10.089 ======================================================== 00:16:10.089 00:16:10.089 05:10:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:10.348 [2024-12-09 05:10:52.726709] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:10.348 Initializing NVMe Controllers 00:16:10.348 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:10.348 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:10.348 Namespace ID: 1 size: 0GB 00:16:10.348 Initialization complete. 00:16:10.348 INFO: using host memory buffer for IO 00:16:10.348 Hello world! 
00:16:10.348 [2024-12-09 05:10:52.734767] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:10.607 05:10:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:10.866 [2024-12-09 05:10:53.119922] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:11.804 Initializing NVMe Controllers 00:16:11.804 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:11.805 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:11.805 Initialization complete. Launching workers. 00:16:11.805 submit (in ns) avg, min, max = 7818.9, 3054.4, 4000152.0 00:16:11.805 complete (in ns) avg, min, max = 20337.0, 1723.2, 7988933.6 00:16:11.805 00:16:11.805 Submit histogram 00:16:11.805 ================ 00:16:11.805 Range in us Cumulative Count 00:16:11.805 3.046 - 3.059: 0.0178% ( 3) 00:16:11.805 3.059 - 3.072: 0.0415% ( 4) 00:16:11.805 3.072 - 3.085: 0.0831% ( 7) 00:16:11.805 3.085 - 3.098: 0.1840% ( 17) 00:16:11.805 3.098 - 3.110: 0.4748% ( 49) 00:16:11.805 3.110 - 3.123: 0.8783% ( 68) 00:16:11.805 3.123 - 3.136: 1.6914% ( 137) 00:16:11.805 3.136 - 3.149: 3.1810% ( 251) 00:16:11.805 3.149 - 3.162: 5.6142% ( 410) 00:16:11.805 3.162 - 3.174: 9.7270% ( 693) 00:16:11.805 3.174 - 3.187: 14.0534% ( 729) 00:16:11.805 3.187 - 3.200: 19.4303% ( 906) 00:16:11.805 3.200 - 3.213: 25.3531% ( 998) 00:16:11.805 3.213 - 3.226: 31.2166% ( 988) 00:16:11.805 3.226 - 3.238: 37.2344% ( 1014) 00:16:11.805 3.238 - 3.251: 43.2878% ( 1020) 00:16:11.805 3.251 - 3.264: 49.5074% ( 1048) 00:16:11.805 3.264 - 3.277: 55.4184% ( 996) 00:16:11.805 3.277 - 3.302: 61.3353% ( 997) 00:16:11.805 3.302 - 3.328: 67.1157% ( 974) 00:16:11.805 3.328 - 3.354: 72.3798% ( 887) 
00:16:11.805 3.354 - 3.379: 77.4481% ( 854) 00:16:11.805 3.379 - 3.405: 85.5786% ( 1370) 00:16:11.805 3.405 - 3.430: 87.8279% ( 379) 00:16:11.805 3.430 - 3.456: 88.4570% ( 106) 00:16:11.805 3.456 - 3.482: 89.0564% ( 101) 00:16:11.805 3.482 - 3.507: 90.1899% ( 191) 00:16:11.805 3.507 - 3.533: 91.5727% ( 233) 00:16:11.805 3.533 - 3.558: 93.2997% ( 291) 00:16:11.805 3.558 - 3.584: 94.8783% ( 266) 00:16:11.805 3.584 - 3.610: 95.9407% ( 179) 00:16:11.805 3.610 - 3.635: 97.0861% ( 193) 00:16:11.805 3.635 - 3.661: 98.2018% ( 188) 00:16:11.805 3.661 - 3.686: 98.7893% ( 99) 00:16:11.805 3.686 - 3.712: 99.0504% ( 44) 00:16:11.805 3.712 - 3.738: 99.3056% ( 43) 00:16:11.805 3.738 - 3.763: 99.4540% ( 25) 00:16:11.805 3.763 - 3.789: 99.5134% ( 10) 00:16:11.805 3.789 - 3.814: 99.5430% ( 5) 00:16:11.805 3.814 - 3.840: 99.5490% ( 1) 00:16:11.805 3.840 - 3.866: 99.5549% ( 1) 00:16:11.805 3.917 - 3.942: 99.5608% ( 1) 00:16:11.805 3.968 - 3.994: 99.5668% ( 1) 00:16:11.805 5.632 - 5.658: 99.5727% ( 1) 00:16:11.805 5.760 - 5.786: 99.5786% ( 1) 00:16:11.805 5.837 - 5.862: 99.5846% ( 1) 00:16:11.805 6.093 - 6.118: 99.5905% ( 1) 00:16:11.805 6.298 - 6.323: 99.5964% ( 1) 00:16:11.805 6.323 - 6.349: 99.6024% ( 1) 00:16:11.805 6.374 - 6.400: 99.6083% ( 1) 00:16:11.805 6.400 - 6.426: 99.6142% ( 1) 00:16:11.805 6.426 - 6.451: 99.6202% ( 1) 00:16:11.805 6.605 - 6.656: 99.6320% ( 2) 00:16:11.805 6.656 - 6.707: 99.6439% ( 2) 00:16:11.805 6.912 - 6.963: 99.6499% ( 1) 00:16:11.805 7.014 - 7.066: 99.6617% ( 2) 00:16:11.805 7.066 - 7.117: 99.6855% ( 4) 00:16:11.805 7.117 - 7.168: 99.6914% ( 1) 00:16:11.805 7.168 - 7.219: 99.6973% ( 1) 00:16:11.805 7.219 - 7.270: 99.7092% ( 2) 00:16:11.805 7.373 - 7.424: 99.7270% ( 3) 00:16:11.805 7.424 - 7.475: 99.7329% ( 1) 00:16:11.805 7.475 - 7.526: 99.7567% ( 4) 00:16:11.805 7.578 - 7.629: 99.7626% ( 1) 00:16:11.805 7.629 - 7.680: 99.7864% ( 4) 00:16:11.805 7.680 - 7.731: 99.7982% ( 2) 00:16:11.805 7.731 - 7.782: 99.8160% ( 3) 00:16:11.805 7.782 - 7.834: 99.8220% 
( 1) 00:16:11.805 7.834 - 7.885: 99.8398% ( 3) 00:16:11.805 7.885 - 7.936: 99.8457% ( 1) 00:16:11.805 7.936 - 7.987: 99.8576% ( 2) 00:16:11.805 7.987 - 8.038: 99.8635% ( 1) 00:16:11.805 8.038 - 8.090: 99.8694% ( 1) 00:16:11.805 8.141 - 8.192: 99.8754% ( 1) 00:16:11.805 9.318 - 9.370: 99.8813% ( 1) 00:16:11.805 18.944 - 19.046: 99.8872% ( 1) 00:16:11.805 3984.589 - 4010.803: 100.0000% ( 19) 00:16:11.805 00:16:11.805 Complete histogram 00:16:11.805 ================== 00:16:11.805 Range in us Cumulative Count 00:16:11.805 1.715 - [2024-12-09 05:10:54.218105] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:11.805 1.728: 0.0059% ( 1) 00:16:11.805 1.741 - 1.754: 0.0119% ( 1) 00:16:11.805 1.754 - 1.766: 0.0297% ( 3) 00:16:11.805 1.766 - 1.779: 0.1602% ( 22) 00:16:11.805 1.779 - 1.792: 0.9318% ( 130) 00:16:11.805 1.792 - 1.805: 2.0475% ( 188) 00:16:11.805 1.805 - 1.818: 3.0326% ( 166) 00:16:11.805 1.818 - 1.830: 13.2463% ( 1721) 00:16:11.805 1.830 - 1.843: 55.0920% ( 7051) 00:16:11.805 1.843 - 1.856: 87.9763% ( 5541) 00:16:11.805 1.856 - 1.869: 96.4036% ( 1420) 00:16:11.805 1.869 - 1.882: 98.6231% ( 374) 00:16:11.805 1.882 - 1.894: 99.0742% ( 76) 00:16:11.805 1.894 - 1.907: 99.1929% ( 20) 00:16:11.805 1.907 - 1.920: 99.2226% ( 5) 00:16:11.805 1.920 - 1.933: 99.2404% ( 3) 00:16:11.805 1.933 - 1.946: 99.2522% ( 2) 00:16:11.805 1.946 - 1.958: 99.2582% ( 1) 00:16:11.805 1.958 - 1.971: 99.2700% ( 2) 00:16:11.805 1.971 - 1.984: 99.2819% ( 2) 00:16:11.805 2.010 - 2.022: 99.2938% ( 2) 00:16:11.805 2.074 - 2.086: 99.2997% ( 1) 00:16:11.805 2.112 - 2.125: 99.3116% ( 2) 00:16:11.805 2.138 - 2.150: 99.3175% ( 1) 00:16:11.805 2.189 - 2.202: 99.3234% ( 1) 00:16:11.805 2.214 - 2.227: 99.3294% ( 1) 00:16:11.805 2.227 - 2.240: 99.3353% ( 1) 00:16:11.805 2.253 - 2.266: 99.3412% ( 1) 00:16:11.805 4.378 - 4.403: 99.3472% ( 1) 00:16:11.805 4.480 - 4.506: 99.3531% ( 1) 00:16:11.805 4.557 - 4.582: 99.3591% ( 1) 00:16:11.805 4.582 - 
4.608: 99.3650% ( 1) 00:16:11.805 5.197 - 5.222: 99.3709% ( 1) 00:16:11.805 5.376 - 5.402: 99.3769% ( 1) 00:16:11.805 5.504 - 5.530: 99.3887% ( 2) 00:16:11.805 5.555 - 5.581: 99.4006% ( 2) 00:16:11.805 5.632 - 5.658: 99.4065% ( 1) 00:16:11.805 5.683 - 5.709: 99.4125% ( 1) 00:16:11.805 5.734 - 5.760: 99.4243% ( 2) 00:16:11.805 5.811 - 5.837: 99.4303% ( 1) 00:16:11.805 5.862 - 5.888: 99.4362% ( 1) 00:16:11.805 6.067 - 6.093: 99.4421% ( 1) 00:16:11.805 6.093 - 6.118: 99.4481% ( 1) 00:16:11.805 6.272 - 6.298: 99.4540% ( 1) 00:16:11.805 6.374 - 6.400: 99.4599% ( 1) 00:16:11.805 6.810 - 6.861: 99.4659% ( 1) 00:16:11.805 6.912 - 6.963: 99.4718% ( 1) 00:16:11.805 6.963 - 7.014: 99.4777% ( 1) 00:16:11.805 7.066 - 7.117: 99.4837% ( 1) 00:16:11.805 7.322 - 7.373: 99.4896% ( 1) 00:16:11.805 7.424 - 7.475: 99.4955% ( 1) 00:16:11.805 8.704 - 8.755: 99.5015% ( 1) 00:16:11.805 12.237 - 12.288: 99.5074% ( 1) 00:16:11.805 12.954 - 13.005: 99.5134% ( 1) 00:16:11.805 13.312 - 13.414: 99.5193% ( 1) 00:16:11.805 14.234 - 14.336: 99.5252% ( 1) 00:16:11.805 14.336 - 14.438: 99.5312% ( 1) 00:16:11.805 15.258 - 15.360: 99.5371% ( 1) 00:16:11.805 1808.794 - 1821.901: 99.5430% ( 1) 00:16:11.805 1992.294 - 2005.402: 99.5490% ( 1) 00:16:11.805 3984.589 - 4010.803: 99.9941% ( 75) 00:16:11.805 7969.178 - 8021.606: 100.0000% ( 1) 00:16:11.805 00:16:11.805 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:11.805 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:11.805 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:11.805 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:11.805 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:12.065 [ 00:16:12.065 { 00:16:12.065 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:12.065 "subtype": "Discovery", 00:16:12.065 "listen_addresses": [], 00:16:12.065 "allow_any_host": true, 00:16:12.065 "hosts": [] 00:16:12.065 }, 00:16:12.065 { 00:16:12.065 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:12.065 "subtype": "NVMe", 00:16:12.065 "listen_addresses": [ 00:16:12.065 { 00:16:12.065 "trtype": "VFIOUSER", 00:16:12.065 "adrfam": "IPv4", 00:16:12.065 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:12.065 "trsvcid": "0" 00:16:12.065 } 00:16:12.065 ], 00:16:12.065 "allow_any_host": true, 00:16:12.065 "hosts": [], 00:16:12.065 "serial_number": "SPDK1", 00:16:12.065 "model_number": "SPDK bdev Controller", 00:16:12.065 "max_namespaces": 32, 00:16:12.065 "min_cntlid": 1, 00:16:12.065 "max_cntlid": 65519, 00:16:12.065 "namespaces": [ 00:16:12.065 { 00:16:12.065 "nsid": 1, 00:16:12.065 "bdev_name": "Malloc1", 00:16:12.065 "name": "Malloc1", 00:16:12.065 "nguid": "E49841F702424679AE41549CF3071135", 00:16:12.065 "uuid": "e49841f7-0242-4679-ae41-549cf3071135" 00:16:12.065 }, 00:16:12.065 { 00:16:12.065 "nsid": 2, 00:16:12.065 "bdev_name": "Malloc3", 00:16:12.065 "name": "Malloc3", 00:16:12.065 "nguid": "4F05F21F878C4421B9FD884BE833B445", 00:16:12.065 "uuid": "4f05f21f-878c-4421-b9fd-884be833b445" 00:16:12.065 } 00:16:12.065 ] 00:16:12.065 }, 00:16:12.065 { 00:16:12.065 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:12.065 "subtype": "NVMe", 00:16:12.065 "listen_addresses": [ 00:16:12.065 { 00:16:12.065 "trtype": "VFIOUSER", 00:16:12.065 "adrfam": "IPv4", 00:16:12.065 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:12.065 "trsvcid": "0" 00:16:12.065 } 00:16:12.065 ], 00:16:12.065 "allow_any_host": true, 00:16:12.065 "hosts": [], 00:16:12.065 "serial_number": "SPDK2", 00:16:12.065 "model_number": "SPDK bdev Controller", 00:16:12.065 
"max_namespaces": 32, 00:16:12.065 "min_cntlid": 1, 00:16:12.065 "max_cntlid": 65519, 00:16:12.065 "namespaces": [ 00:16:12.065 { 00:16:12.065 "nsid": 1, 00:16:12.065 "bdev_name": "Malloc2", 00:16:12.065 "name": "Malloc2", 00:16:12.065 "nguid": "16714CF943DC40CB80C661FE37B90FFE", 00:16:12.065 "uuid": "16714cf9-43dc-40cb-80c6-61fe37b90ffe" 00:16:12.065 } 00:16:12.065 ] 00:16:12.065 } 00:16:12.065 ] 00:16:12.065 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:12.065 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:12.065 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=454996 00:16:12.065 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:12.065 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:12.065 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:12.065 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:16:12.065 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:16:12.065 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:16:12.325 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:12.325 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:16:12.325 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:16:12.325 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:16:12.325 [2024-12-09 05:10:54.638638] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:12.325 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:12.325 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:12.325 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:12.325 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:12.325 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:12.584 Malloc4 00:16:12.584 05:10:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:12.842 [2024-12-09 05:10:55.058899] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:12.842 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:12.842 Asynchronous Event Request test 00:16:12.842 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:12.842 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:12.842 
Registering asynchronous event callbacks... 00:16:12.842 Starting namespace attribute notice tests for all controllers... 00:16:12.842 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:12.842 aer_cb - Changed Namespace 00:16:12.842 Cleaning up... 00:16:12.842 [ 00:16:12.842 { 00:16:12.842 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:12.842 "subtype": "Discovery", 00:16:12.842 "listen_addresses": [], 00:16:12.842 "allow_any_host": true, 00:16:12.842 "hosts": [] 00:16:12.842 }, 00:16:12.842 { 00:16:12.842 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:12.842 "subtype": "NVMe", 00:16:12.842 "listen_addresses": [ 00:16:12.842 { 00:16:12.842 "trtype": "VFIOUSER", 00:16:12.842 "adrfam": "IPv4", 00:16:12.842 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:12.842 "trsvcid": "0" 00:16:12.842 } 00:16:12.842 ], 00:16:12.842 "allow_any_host": true, 00:16:12.842 "hosts": [], 00:16:12.842 "serial_number": "SPDK1", 00:16:12.842 "model_number": "SPDK bdev Controller", 00:16:12.842 "max_namespaces": 32, 00:16:12.842 "min_cntlid": 1, 00:16:12.842 "max_cntlid": 65519, 00:16:12.842 "namespaces": [ 00:16:12.842 { 00:16:12.842 "nsid": 1, 00:16:12.842 "bdev_name": "Malloc1", 00:16:12.842 "name": "Malloc1", 00:16:12.842 "nguid": "E49841F702424679AE41549CF3071135", 00:16:12.842 "uuid": "e49841f7-0242-4679-ae41-549cf3071135" 00:16:12.842 }, 00:16:12.842 { 00:16:12.842 "nsid": 2, 00:16:12.842 "bdev_name": "Malloc3", 00:16:12.842 "name": "Malloc3", 00:16:12.842 "nguid": "4F05F21F878C4421B9FD884BE833B445", 00:16:12.842 "uuid": "4f05f21f-878c-4421-b9fd-884be833b445" 00:16:12.842 } 00:16:12.842 ] 00:16:12.842 }, 00:16:12.842 { 00:16:12.842 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:12.842 "subtype": "NVMe", 00:16:12.842 "listen_addresses": [ 00:16:12.842 { 00:16:12.842 "trtype": "VFIOUSER", 00:16:12.842 "adrfam": "IPv4", 00:16:12.842 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:12.842 "trsvcid": "0" 
00:16:12.842 } 00:16:12.842 ], 00:16:12.842 "allow_any_host": true, 00:16:12.842 "hosts": [], 00:16:12.842 "serial_number": "SPDK2", 00:16:12.842 "model_number": "SPDK bdev Controller", 00:16:12.842 "max_namespaces": 32, 00:16:12.842 "min_cntlid": 1, 00:16:12.842 "max_cntlid": 65519, 00:16:12.842 "namespaces": [ 00:16:12.842 { 00:16:12.842 "nsid": 1, 00:16:12.842 "bdev_name": "Malloc2", 00:16:12.842 "name": "Malloc2", 00:16:12.842 "nguid": "16714CF943DC40CB80C661FE37B90FFE", 00:16:12.842 "uuid": "16714cf9-43dc-40cb-80c6-61fe37b90ffe" 00:16:12.842 }, 00:16:12.842 { 00:16:12.842 "nsid": 2, 00:16:12.842 "bdev_name": "Malloc4", 00:16:12.842 "name": "Malloc4", 00:16:12.842 "nguid": "EE670FBB6BBA469DA8560D93766F9024", 00:16:12.842 "uuid": "ee670fbb-6bba-469d-a856-0d93766f9024" 00:16:12.842 } 00:16:12.842 ] 00:16:12.842 } 00:16:12.842 ] 00:16:12.843 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 454996 00:16:12.843 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:12.843 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 446698 00:16:12.843 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 446698 ']' 00:16:12.843 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 446698 00:16:12.843 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:12.843 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.843 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 446698 00:16:13.101 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.101 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.101 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 446698' 00:16:13.101 killing process with pid 446698 00:16:13.101 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 446698 00:16:13.101 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 446698 00:16:13.360 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:13.360 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:13.360 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:13.360 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:13.360 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:13.360 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=455264 00:16:13.360 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 455264' 00:16:13.360 Process pid: 455264 00:16:13.360 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:13.360 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:13.360 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 455264 00:16:13.360 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@835 -- # '[' -z 455264 ']' 00:16:13.360 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.360 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.360 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.360 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.360 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:13.360 [2024-12-09 05:10:55.693657] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:13.360 [2024-12-09 05:10:55.694568] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:16:13.360 [2024-12-09 05:10:55.694608] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.360 [2024-12-09 05:10:55.783129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:13.360 [2024-12-09 05:10:55.825620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.360 [2024-12-09 05:10:55.825655] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:13.360 [2024-12-09 05:10:55.825670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:13.360 [2024-12-09 05:10:55.825680] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:13.360 [2024-12-09 05:10:55.825690] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:13.361 [2024-12-09 05:10:55.827411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.361 [2024-12-09 05:10:55.827532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:13.361 [2024-12-09 05:10:55.827641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.361 [2024-12-09 05:10:55.827642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:13.619 [2024-12-09 05:10:55.897757] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:13.619 [2024-12-09 05:10:55.897916] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:13.619 [2024-12-09 05:10:55.898577] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:13.619 [2024-12-09 05:10:55.898800] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:13.619 [2024-12-09 05:10:55.898848] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:16:13.619 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:13.619 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:13.619 05:10:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:14.554 05:10:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:14.812 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:14.813 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:14.813 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:14.813 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:14.813 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:15.071 Malloc1 00:16:15.071 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:15.330 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:15.330 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:16:15.589 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:15.589 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:15.589 05:10:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:15.848 Malloc2 00:16:15.848 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:16.106 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:16.364 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:16.364 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:16.364 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 455264 00:16:16.364 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 455264 ']' 00:16:16.365 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 455264 00:16:16.365 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:16.365 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:16.365 05:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 455264 00:16:16.623 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:16.623 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:16.624 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 455264' 00:16:16.624 killing process with pid 455264 00:16:16.624 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 455264 00:16:16.624 05:10:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 455264 00:16:16.624 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:16.624 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:16.624 00:16:16.624 real 0m53.226s 00:16:16.624 user 3m25.389s 00:16:16.624 sys 0m3.881s 00:16:16.624 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.624 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:16.624 ************************************ 00:16:16.624 END TEST nvmf_vfio_user 00:16:16.624 ************************************ 00:16:16.883 05:10:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:16.883 05:10:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:16.883 05:10:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.883 05:10:59 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:16:16.883 ************************************ 00:16:16.883 START TEST nvmf_vfio_user_nvme_compliance 00:16:16.883 ************************************ 00:16:16.883 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:16.883 * Looking for test storage... 00:16:16.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:16.883 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:16.883 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:16:16.883 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:16.883 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:16.883 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:16.883 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:16.883 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:16.883 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:16.883 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:16.883 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:16.883 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:16.884 05:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:16.884 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:16.884 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:16.884 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:16.884 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:16.884 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:16.884 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:16.884 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:16.884 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:16.884 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:16.884 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:16.884 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:17.144 05:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:17.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.144 --rc genhtml_branch_coverage=1 00:16:17.144 --rc genhtml_function_coverage=1 00:16:17.144 --rc genhtml_legend=1 00:16:17.144 --rc geninfo_all_blocks=1 00:16:17.144 --rc geninfo_unexecuted_blocks=1 00:16:17.144 00:16:17.144 ' 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:17.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.144 --rc genhtml_branch_coverage=1 00:16:17.144 --rc genhtml_function_coverage=1 00:16:17.144 --rc genhtml_legend=1 00:16:17.144 --rc geninfo_all_blocks=1 00:16:17.144 --rc geninfo_unexecuted_blocks=1 00:16:17.144 00:16:17.144 ' 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:17.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.144 --rc genhtml_branch_coverage=1 00:16:17.144 --rc genhtml_function_coverage=1 00:16:17.144 --rc 
genhtml_legend=1 00:16:17.144 --rc geninfo_all_blocks=1 00:16:17.144 --rc geninfo_unexecuted_blocks=1 00:16:17.144 00:16:17.144 ' 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:17.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.144 --rc genhtml_branch_coverage=1 00:16:17.144 --rc genhtml_function_coverage=1 00:16:17.144 --rc genhtml_legend=1 00:16:17.144 --rc geninfo_all_blocks=1 00:16:17.144 --rc geninfo_unexecuted_blocks=1 00:16:17.144 00:16:17.144 ' 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.144 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.145 05:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:17.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:17.145 05:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=455891 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 455891' 00:16:17.145 Process pid: 455891 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 455891 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 455891 ']' 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.145 05:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:17.145 [2024-12-09 05:10:59.449905] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:16:17.145 [2024-12-09 05:10:59.449953] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.145 [2024-12-09 05:10:59.543370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:17.145 [2024-12-09 05:10:59.583230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.145 [2024-12-09 05:10:59.583269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.145 [2024-12-09 05:10:59.583283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.145 [2024-12-09 05:10:59.583293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.145 [2024-12-09 05:10:59.583303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:17.145 [2024-12-09 05:10:59.584762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.145 [2024-12-09 05:10:59.584873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.145 [2024-12-09 05:10:59.584875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.081 05:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.081 05:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:16:18.081 05:11:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.020 05:11:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:19.020 malloc0 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:19.020 05:11:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:19.279 00:16:19.279 00:16:19.279 CUnit - A unit testing framework for C - Version 2.1-3 00:16:19.279 http://cunit.sourceforge.net/ 00:16:19.279 00:16:19.279 00:16:19.279 Suite: nvme_compliance 00:16:19.279 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-09 05:11:01.566691] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:19.279 [2024-12-09 05:11:01.568071] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:19.279 [2024-12-09 05:11:01.568088] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:19.279 [2024-12-09 05:11:01.568096] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:19.279 [2024-12-09 05:11:01.569717] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:19.279 passed 00:16:19.279 Test: admin_identify_ctrlr_verify_fused ...[2024-12-09 05:11:01.653273] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:19.280 [2024-12-09 05:11:01.656294] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:19.280 passed 00:16:19.280 Test: admin_identify_ns ...[2024-12-09 05:11:01.741344] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:19.538 [2024-12-09 05:11:01.802224] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:19.538 [2024-12-09 05:11:01.810220] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:19.538 [2024-12-09 05:11:01.831319] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:16:19.538 passed 00:16:19.538 Test: admin_get_features_mandatory_features ...[2024-12-09 05:11:01.914642] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:19.538 [2024-12-09 05:11:01.917662] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:19.538 passed 00:16:19.538 Test: admin_get_features_optional_features ...[2024-12-09 05:11:02.000175] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:19.538 [2024-12-09 05:11:02.003192] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:19.796 passed 00:16:19.796 Test: admin_set_features_number_of_queues ...[2024-12-09 05:11:02.083818] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:19.796 [2024-12-09 05:11:02.192304] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:19.796 passed 00:16:20.055 Test: admin_get_log_page_mandatory_logs ...[2024-12-09 05:11:02.270031] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.055 [2024-12-09 05:11:02.276065] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.055 passed 00:16:20.055 Test: admin_get_log_page_with_lpo ...[2024-12-09 05:11:02.354734] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.055 [2024-12-09 05:11:02.426220] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:20.055 [2024-12-09 05:11:02.438271] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.055 passed 00:16:20.055 Test: fabric_property_get ...[2024-12-09 05:11:02.519562] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.055 [2024-12-09 05:11:02.520810] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:20.055 [2024-12-09 05:11:02.522580] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.314 passed 00:16:20.314 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-09 05:11:02.603096] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.315 [2024-12-09 05:11:02.604380] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:20.315 [2024-12-09 05:11:02.609145] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.315 passed 00:16:20.315 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-09 05:11:02.689812] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.315 [2024-12-09 05:11:02.773217] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:20.574 [2024-12-09 05:11:02.789217] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:20.574 [2024-12-09 05:11:02.794306] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.574 passed 00:16:20.574 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-09 05:11:02.875914] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.574 [2024-12-09 05:11:02.877159] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:20.574 [2024-12-09 05:11:02.878931] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.574 passed 00:16:20.574 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-09 05:11:02.957319] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.574 [2024-12-09 05:11:03.034225] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:20.833 [2024-12-09 
05:11:03.058217] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:20.833 [2024-12-09 05:11:03.063308] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.833 passed 00:16:20.833 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-09 05:11:03.145729] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.833 [2024-12-09 05:11:03.147005] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:20.833 [2024-12-09 05:11:03.147033] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:20.833 [2024-12-09 05:11:03.148751] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.833 passed 00:16:20.833 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-09 05:11:03.229418] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:21.092 [2024-12-09 05:11:03.322218] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:21.092 [2024-12-09 05:11:03.330213] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:21.092 [2024-12-09 05:11:03.338214] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:21.092 [2024-12-09 05:11:03.346219] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:21.092 [2024-12-09 05:11:03.375292] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:21.092 passed 00:16:21.092 Test: admin_create_io_sq_verify_pc ...[2024-12-09 05:11:03.459052] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:21.092 [2024-12-09 05:11:03.482229] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:21.092 [2024-12-09 05:11:03.500284] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:21.092 passed 00:16:21.351 Test: admin_create_io_qp_max_qps ...[2024-12-09 05:11:03.579780] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:22.289 [2024-12-09 05:11:04.676219] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:16:22.868 [2024-12-09 05:11:05.071607] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:22.868 passed 00:16:22.868 Test: admin_create_io_sq_shared_cq ...[2024-12-09 05:11:05.150372] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:22.868 [2024-12-09 05:11:05.284215] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:22.868 [2024-12-09 05:11:05.321273] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:23.127 passed 00:16:23.127 00:16:23.127 Run Summary: Type Total Ran Passed Failed Inactive 00:16:23.127 suites 1 1 n/a 0 0 00:16:23.127 tests 18 18 18 0 0 00:16:23.128 asserts 360 360 360 0 n/a 00:16:23.128 00:16:23.128 Elapsed time = 1.549 seconds 00:16:23.128 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 455891 00:16:23.128 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 455891 ']' 00:16:23.128 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 455891 00:16:23.128 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:16:23.128 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:23.128 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 455891 00:16:23.128 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:23.128 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:23.128 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 455891' 00:16:23.128 killing process with pid 455891 00:16:23.128 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 455891 00:16:23.128 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 455891 00:16:23.387 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:23.387 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:23.387 00:16:23.387 real 0m6.489s 00:16:23.387 user 0m18.162s 00:16:23.387 sys 0m0.729s 00:16:23.387 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.387 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:23.387 ************************************ 00:16:23.387 END TEST nvmf_vfio_user_nvme_compliance 00:16:23.387 ************************************ 00:16:23.387 05:11:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:23.387 05:11:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:23.387 05:11:05 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.387 05:11:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:23.387 ************************************ 00:16:23.387 START TEST nvmf_vfio_user_fuzz 00:16:23.387 ************************************ 00:16:23.387 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:23.387 * Looking for test storage... 00:16:23.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:23.387 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:23.387 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:16:23.387 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:23.647 05:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:23.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.647 --rc genhtml_branch_coverage=1 00:16:23.647 --rc genhtml_function_coverage=1 00:16:23.647 --rc genhtml_legend=1 00:16:23.647 --rc geninfo_all_blocks=1 00:16:23.647 --rc geninfo_unexecuted_blocks=1 00:16:23.647 00:16:23.647 ' 00:16:23.647 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:23.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.647 --rc genhtml_branch_coverage=1 00:16:23.648 --rc genhtml_function_coverage=1 00:16:23.648 --rc genhtml_legend=1 00:16:23.648 --rc geninfo_all_blocks=1 00:16:23.648 --rc geninfo_unexecuted_blocks=1 00:16:23.648 00:16:23.648 ' 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:23.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.648 --rc genhtml_branch_coverage=1 00:16:23.648 --rc genhtml_function_coverage=1 00:16:23.648 --rc genhtml_legend=1 00:16:23.648 --rc geninfo_all_blocks=1 00:16:23.648 --rc geninfo_unexecuted_blocks=1 00:16:23.648 00:16:23.648 ' 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:23.648 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:23.648 --rc genhtml_branch_coverage=1 00:16:23.648 --rc genhtml_function_coverage=1 00:16:23.648 --rc genhtml_legend=1 00:16:23.648 --rc geninfo_all_blocks=1 00:16:23.648 --rc geninfo_unexecuted_blocks=1 00:16:23.648 00:16:23.648 ' 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.648 05:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:23.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=457039 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 457039' 00:16:23.648 Process pid: 457039 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 457039 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 457039 ']' 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.648 05:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.648 05:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:24.582 05:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.582 05:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:16:24.582 05:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:25.530 malloc0 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:25.530 05:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:57.619 Fuzzing completed. Shutting down the fuzz application 00:16:57.619 00:16:57.619 Dumping successful admin opcodes: 00:16:57.619 9, 10, 00:16:57.619 Dumping successful io opcodes: 00:16:57.619 0, 00:16:57.619 NS: 0x20000081ef00 I/O qp, Total commands completed: 850840, total successful commands: 3304, random_seed: 3363133504 00:16:57.619 NS: 0x20000081ef00 admin qp, Total commands completed: 166928, total successful commands: 40, random_seed: 183429504 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 457039 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 457039 ']' 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 457039 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 457039 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 457039' 00:16:57.619 killing process with pid 457039 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 457039 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 457039 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:57.619 00:16:57.619 real 0m32.964s 00:16:57.619 user 0m32.820s 00:16:57.619 sys 0m27.885s 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:57.619 ************************************ 00:16:57.619 END TEST nvmf_vfio_user_fuzz 00:16:57.619 ************************************ 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:16:57.619 ************************************ 00:16:57.619 START TEST nvmf_auth_target 00:16:57.619 ************************************ 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:57.619 * Looking for test storage... 00:16:57.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 
00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:57.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.619 --rc genhtml_branch_coverage=1 00:16:57.619 --rc genhtml_function_coverage=1 00:16:57.619 --rc genhtml_legend=1 00:16:57.619 --rc geninfo_all_blocks=1 00:16:57.619 --rc geninfo_unexecuted_blocks=1 00:16:57.619 00:16:57.619 ' 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:57.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.619 --rc genhtml_branch_coverage=1 00:16:57.619 --rc genhtml_function_coverage=1 00:16:57.619 --rc genhtml_legend=1 00:16:57.619 --rc geninfo_all_blocks=1 00:16:57.619 --rc geninfo_unexecuted_blocks=1 00:16:57.619 00:16:57.619 ' 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:57.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.619 --rc genhtml_branch_coverage=1 00:16:57.619 --rc genhtml_function_coverage=1 00:16:57.619 --rc genhtml_legend=1 00:16:57.619 --rc geninfo_all_blocks=1 00:16:57.619 --rc geninfo_unexecuted_blocks=1 00:16:57.619 00:16:57.619 ' 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:57.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.619 --rc genhtml_branch_coverage=1 00:16:57.619 --rc genhtml_function_coverage=1 00:16:57.619 --rc genhtml_legend=1 00:16:57.619 --rc geninfo_all_blocks=1 00:16:57.619 --rc geninfo_unexecuted_blocks=1 00:16:57.619 00:16:57.619 ' 
00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:57.619 05:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:57.619 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.619 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.619 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:57.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:57.620 05:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:57.620 05:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:57.620 05:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.193 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:04.193 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:04.193 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:04.193 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:04.193 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:04.193 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:04.193 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:04.193 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:04.193 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:04.193 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:04.193 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:04.193 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:04.193 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:04.193 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:04.193 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:04.193 05:11:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:04.194 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:04.194 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:04.194 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:04.194 05:11:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:04.194 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:04.194 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.194 
05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:04.194 Found net devices under 0000:af:00.0: cvl_0_0 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:04.194 
05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:04.194 Found net devices under 0000:af:00.1: cvl_0_1 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:04.194 05:11:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:04.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:04.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:17:04.194 00:17:04.194 --- 10.0.0.2 ping statistics --- 00:17:04.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.194 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:04.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:04.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:17:04.194 00:17:04.194 --- 10.0.0.1 ping statistics --- 00:17:04.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.194 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
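The `nvmf_tcp_init` sequence traced above follows a common pattern for single-host NVMe/TCP testing: one port of the two-port NIC is moved into a network namespace so target (10.0.0.2) and initiator (10.0.0.1) get genuinely separate network stacks and traffic crosses a real link. A condensed sketch of those commands (interface and namespace names taken from this run; requires root and the physical ports, so this is illustrative, not runnable standalone):

```shell
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1   # start from clean addresses
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                      # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address (host side)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (namespace side)
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Let the NVMe/TCP listener port through the host firewall, then verify both directions:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1
```

Afterwards the target application is launched with `ip netns exec cvl_0_0_ns_spdk ...` (the `NVMF_TARGET_NS_CMD` prefix seen in the log), which is why the ping statistics appear from both sides before `nvmf_tgt` starts.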
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.194 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=465982 00:17:04.195 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:04.195 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 465982 00:17:04.195 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 465982 ']' 00:17:04.195 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.195 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.195 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:04.195 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.195 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=466261 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ed328bb55905cae4445e03bb332d3177259f4290b43a2d55 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.pOy 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ed328bb55905cae4445e03bb332d3177259f4290b43a2d55 0 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ed328bb55905cae4445e03bb332d3177259f4290b43a2d55 0 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:05.132 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ed328bb55905cae4445e03bb332d3177259f4290b43a2d55 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.pOy 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.pOy 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.pOy 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
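The `gen_dhchap_key` trace above reads random bytes with `xxd -p -c0 -l 24 /dev/urandom`, formats them via an inline `python -` step, and writes the result mode 0600 to a `mktemp` file. The on-disk representation is the NVMe DH-HMAC-CHAP secret format: `DHHC-1:<dd>:<base64>:`, where `<dd>` is the hash identifier (00 = null, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and, as I understand the standard encoding, the base64 payload is the key bytes followed by the little-endian CRC-32 of the key. A sketch of that formatting step under those assumptions (the hex key is the one generated in this run; the embedded script is mine, not SPDK's):

```shell
key=ed328bb55905cae4445e03bb332d3177259f4290b43a2d55   # the 24 random bytes drawn above
digest=0                                               # 0=null, 1=sha256, 2=sha384, 3=sha512
secret=$(python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])                        # raw key material
crc = struct.pack("<I", zlib.crc32(key) & 0xFFFFFFFF)   # little-endian CRC-32, appended as a checksum
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
)
echo "$secret"
```

For a 24-byte key the payload is 28 bytes, so the full string is 51 characters: a 10-character `DHHC-1:00:` prefix, 40 characters of base64, and a trailing colon.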
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e683b85dc1a1ab5b8fc04dc1474c131046b7632eed80c2f068ac8701a6f18b92 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.njS 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e683b85dc1a1ab5b8fc04dc1474c131046b7632eed80c2f068ac8701a6f18b92 3 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e683b85dc1a1ab5b8fc04dc1474c131046b7632eed80c2f068ac8701a6f18b92 3 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e683b85dc1a1ab5b8fc04dc1474c131046b7632eed80c2f068ac8701a6f18b92 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.njS 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.njS 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.njS 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2aa80b7a86ec06b4b3ac1b52e16923b8 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.GNZ 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2aa80b7a86ec06b4b3ac1b52e16923b8 1 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
2aa80b7a86ec06b4b3ac1b52e16923b8 1 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2aa80b7a86ec06b4b3ac1b52e16923b8 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.GNZ 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.GNZ 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.GNZ 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2c1c5c781b82f0d0a583114ed405a5330dad90a065689eee 00:17:05.133 05:11:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.fZC 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2c1c5c781b82f0d0a583114ed405a5330dad90a065689eee 2 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2c1c5c781b82f0d0a583114ed405a5330dad90a065689eee 2 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2c1c5c781b82f0d0a583114ed405a5330dad90a065689eee 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.fZC 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.fZC 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.fZC 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=43adef011cc842e23babfac35195f9897b26f1a1fd4d55ea 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.lbO 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 43adef011cc842e23babfac35195f9897b26f1a1fd4d55ea 2 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 43adef011cc842e23babfac35195f9897b26f1a1fd4d55ea 2 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=43adef011cc842e23babfac35195f9897b26f1a1fd4d55ea 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:05.133 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.lbO 00:17:05.393 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.lbO 00:17:05.393 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.lbO 00:17:05.393 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:05.393 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:05.393 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:05.393 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:05.393 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:05.393 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:05.393 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:05.393 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0cf8b07b961c6d60db8d87ae926312b1 00:17:05.393 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:05.393 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.vxO 00:17:05.393 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0cf8b07b961c6d60db8d87ae926312b1 1 00:17:05.393 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0cf8b07b961c6d60db8d87ae926312b1 1 00:17:05.393 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:05.393 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:05.393 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0cf8b07b961c6d60db8d87ae926312b1 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.vxO 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.vxO 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.vxO 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b3f7e515cd20c6b9a5102dddee71ccf04a2a7d7ee73b510fc864cfbecca61520 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.dqj 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b3f7e515cd20c6b9a5102dddee71ccf04a2a7d7ee73b510fc864cfbecca61520 3 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 b3f7e515cd20c6b9a5102dddee71ccf04a2a7d7ee73b510fc864cfbecca61520 3 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b3f7e515cd20c6b9a5102dddee71ccf04a2a7d7ee73b510fc864cfbecca61520 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.dqj 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.dqj 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.dqj 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 465982 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 465982 ']' 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
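The three `gen_dhchap_key` calls traced above (sha384/48, sha256/32, sha512/64) all follow the same pattern: `xxd -p -c0 -l $((len / 2)) /dev/urandom` emits `len` hex characters, which become the raw DH-HMAC-CHAP key material. A minimal sketch of that length relationship, for the sha384 case seen in this log (the standalone layout is illustrative, not the actual `nvmf/common.sh` source):

```shell
# Sketch of the key-length logic seen in gen_dhchap_key: a key of $len
# hex characters is read as $((len / 2)) raw bytes from /dev/urandom.
# xxd flags: -p plain hex dump, -c0 no line wrapping, -l byte count.
len=48   # sha384 case from the log; sha256 uses 32, sha512 uses 64
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
echo "${#key}"   # hex string length equals $len
```

The temp file then gets `chmod 0600` before being handed to `keyring_file_add_key`, as the subsequent trace lines show.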
00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.394 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.653 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.653 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:05.653 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 466261 /var/tmp/host.sock 00:17:05.653 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 466261 ']' 00:17:05.653 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:05.653 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.654 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:05.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:17:05.654 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.654 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.914 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.914 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:05.914 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:05.914 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.914 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.914 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.914 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:05.914 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pOy 00:17:05.914 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.914 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.914 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.914 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.pOy 00:17:05.914 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.pOy 00:17:05.914 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.njS ]] 00:17:05.914 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.njS 00:17:06.174 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.174 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.174 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.174 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.njS 00:17:06.174 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.njS 00:17:06.174 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:06.174 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.GNZ 00:17:06.174 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.174 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.174 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.174 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.GNZ 00:17:06.174 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.GNZ 00:17:06.433 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.fZC ]] 00:17:06.433 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fZC 00:17:06.433 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.433 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.433 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.433 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fZC 00:17:06.433 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fZC 00:17:06.693 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:06.693 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lbO 00:17:06.693 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.693 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.693 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.693 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.lbO 00:17:06.693 05:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.lbO 00:17:06.953 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.vxO ]] 00:17:06.953 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.vxO 00:17:06.953 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.953 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.953 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.953 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.vxO 00:17:06.953 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.vxO 00:17:06.953 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:06.953 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.dqj 00:17:06.953 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.953 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.953 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.953 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.dqj 00:17:06.953 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.dqj 00:17:07.212 05:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:07.212 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:07.212 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.212 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.212 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:07.212 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:07.472 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:07.472 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.472 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:07.472 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:07.472 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:07.472 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.472 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.472 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.472 05:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.472 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.472 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.472 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.472 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.735 00:17:07.735 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.735 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.735 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.735 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.735 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.735 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.735 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:07.735 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.735 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.735 { 00:17:07.735 "cntlid": 1, 00:17:07.735 "qid": 0, 00:17:07.735 "state": "enabled", 00:17:07.735 "thread": "nvmf_tgt_poll_group_000", 00:17:07.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:07.735 "listen_address": { 00:17:07.735 "trtype": "TCP", 00:17:07.735 "adrfam": "IPv4", 00:17:07.735 "traddr": "10.0.0.2", 00:17:07.735 "trsvcid": "4420" 00:17:07.735 }, 00:17:07.735 "peer_address": { 00:17:07.735 "trtype": "TCP", 00:17:07.735 "adrfam": "IPv4", 00:17:07.735 "traddr": "10.0.0.1", 00:17:07.735 "trsvcid": "45792" 00:17:07.735 }, 00:17:07.735 "auth": { 00:17:07.735 "state": "completed", 00:17:07.735 "digest": "sha256", 00:17:07.735 "dhgroup": "null" 00:17:07.735 } 00:17:07.735 } 00:17:07.735 ]' 00:17:07.735 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.994 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.994 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.994 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:07.994 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.994 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.994 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.994 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.253 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:17:08.253 05:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:17:11.542 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.542 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:11.542 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.542 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.542 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.542 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.542 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:17:11.542 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:11.542 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:11.542 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.542 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:11.542 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:11.542 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:11.542 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.542 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.542 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.542 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.542 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.542 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.542 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.542 05:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.801 00:17:11.801 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.801 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.801 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.072 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.072 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.072 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.072 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.072 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.072 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.072 { 00:17:12.072 "cntlid": 3, 00:17:12.072 "qid": 0, 00:17:12.072 "state": "enabled", 00:17:12.072 "thread": "nvmf_tgt_poll_group_000", 00:17:12.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:12.072 "listen_address": { 00:17:12.072 "trtype": "TCP", 00:17:12.072 "adrfam": "IPv4", 00:17:12.072 
"traddr": "10.0.0.2", 00:17:12.072 "trsvcid": "4420" 00:17:12.072 }, 00:17:12.072 "peer_address": { 00:17:12.072 "trtype": "TCP", 00:17:12.072 "adrfam": "IPv4", 00:17:12.072 "traddr": "10.0.0.1", 00:17:12.072 "trsvcid": "54008" 00:17:12.072 }, 00:17:12.072 "auth": { 00:17:12.072 "state": "completed", 00:17:12.072 "digest": "sha256", 00:17:12.072 "dhgroup": "null" 00:17:12.072 } 00:17:12.072 } 00:17:12.072 ]' 00:17:12.072 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.072 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.072 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.072 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:12.072 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.331 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.331 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.331 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.331 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:17:12.331 05:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 
--hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:17:12.898 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.898 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:12.898 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.898 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.898 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.898 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.898 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:12.898 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:13.157 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:13.157 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.157 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:13.157 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
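The `format_dhchap_key`/`format_key` helpers traced earlier turn the raw hex from `/dev/urandom` into the `DHHC-1:<digest>:<base64>:` secrets that `nvme connect` receives above. The body of the `python -` heredoc is not shown in this log; the sketch below assumes the nvme-cli convention of appending a little-endian CRC32 of the ASCII key bytes before base64-encoding, so treat the CRC step as an assumption rather than the verbatim SPDK code:

```python
import base64
import zlib

def format_dhchap_key(key_hex: str, digest: int) -> str:
    """Encode an ASCII hex key as a DHHC-1 secret.

    Assumed convention (per nvme-cli gen-dhchap-key): the payload is the
    ASCII key bytes followed by their little-endian CRC32, base64-encoded,
    wrapped as DHHC-1:<two-digit digest id>:<base64>:
    """
    key = key_hex.encode("ascii")
    crc = zlib.crc32(key).to_bytes(4, "little")  # assumed 4-byte trailer
    b64 = base64.b64encode(key + crc).decode("ascii")
    return f"DHHC-1:{digest:02}:{b64}:"

# The sha384 key generated earlier in this log (digest id 2):
secret = format_dhchap_key(
    "43adef011cc842e23babfac35195f9897b26f1a1fd4d55ea", 2)
print(secret)
```

If the CRC convention holds, this reproduces the `DHHC-1:02:NDNh...` secret passed on the `nvme connect` line just above; at minimum, base64-decoding the middle field yields the original 48-character hex string as its prefix.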
-- # dhgroup=null 00:17:13.157 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:13.157 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.157 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.157 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.157 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.157 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.157 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.157 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.157 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.415 00:17:13.415 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.415 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.415 
05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.685 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.685 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.685 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.685 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.685 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.685 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.685 { 00:17:13.685 "cntlid": 5, 00:17:13.685 "qid": 0, 00:17:13.685 "state": "enabled", 00:17:13.685 "thread": "nvmf_tgt_poll_group_000", 00:17:13.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:13.685 "listen_address": { 00:17:13.685 "trtype": "TCP", 00:17:13.685 "adrfam": "IPv4", 00:17:13.685 "traddr": "10.0.0.2", 00:17:13.685 "trsvcid": "4420" 00:17:13.685 }, 00:17:13.685 "peer_address": { 00:17:13.685 "trtype": "TCP", 00:17:13.685 "adrfam": "IPv4", 00:17:13.685 "traddr": "10.0.0.1", 00:17:13.685 "trsvcid": "54028" 00:17:13.685 }, 00:17:13.685 "auth": { 00:17:13.685 "state": "completed", 00:17:13.685 "digest": "sha256", 00:17:13.685 "dhgroup": "null" 00:17:13.685 } 00:17:13.685 } 00:17:13.685 ]' 00:17:13.685 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.685 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.685 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:17:13.685 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:13.685 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.685 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.685 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.685 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.943 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:17:13.943 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:17:14.511 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.511 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:14.511 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.511 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.511 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.511 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.511 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:14.511 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:14.770 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:14.771 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.771 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:14.771 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:14.771 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:14.771 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.771 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:17:14.771 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.771 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:14.771 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.771 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:14.771 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.771 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.030 00:17:15.030 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.030 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.030 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.290 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.290 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.290 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.290 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.290 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.290 
05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.290 { 00:17:15.290 "cntlid": 7, 00:17:15.290 "qid": 0, 00:17:15.290 "state": "enabled", 00:17:15.290 "thread": "nvmf_tgt_poll_group_000", 00:17:15.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:15.290 "listen_address": { 00:17:15.290 "trtype": "TCP", 00:17:15.290 "adrfam": "IPv4", 00:17:15.290 "traddr": "10.0.0.2", 00:17:15.290 "trsvcid": "4420" 00:17:15.290 }, 00:17:15.290 "peer_address": { 00:17:15.290 "trtype": "TCP", 00:17:15.290 "adrfam": "IPv4", 00:17:15.290 "traddr": "10.0.0.1", 00:17:15.290 "trsvcid": "54064" 00:17:15.290 }, 00:17:15.290 "auth": { 00:17:15.290 "state": "completed", 00:17:15.290 "digest": "sha256", 00:17:15.290 "dhgroup": "null" 00:17:15.290 } 00:17:15.290 } 00:17:15.290 ]' 00:17:15.290 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.290 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.290 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.290 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:15.290 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.290 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.290 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.290 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.549 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:17:15.549 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:17:16.118 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.118 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:16.118 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.118 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.118 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.118 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.118 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.118 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:16.118 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:17:16.377 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:16.377 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.377 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:16.377 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:16.377 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:16.377 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.377 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.377 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.377 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.377 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.377 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.377 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.377 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.636 00:17:16.636 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.636 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.636 05:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.896 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.896 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.896 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.896 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.896 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.896 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.896 { 00:17:16.896 "cntlid": 9, 00:17:16.896 "qid": 0, 00:17:16.896 "state": "enabled", 00:17:16.896 "thread": "nvmf_tgt_poll_group_000", 00:17:16.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:16.896 "listen_address": { 00:17:16.896 "trtype": "TCP", 00:17:16.896 "adrfam": "IPv4", 00:17:16.896 "traddr": "10.0.0.2", 00:17:16.896 "trsvcid": "4420" 00:17:16.896 }, 00:17:16.896 "peer_address": { 00:17:16.896 "trtype": "TCP", 00:17:16.896 "adrfam": "IPv4", 00:17:16.896 "traddr": "10.0.0.1", 00:17:16.896 "trsvcid": "54076" 00:17:16.896 
}, 00:17:16.896 "auth": { 00:17:16.896 "state": "completed", 00:17:16.896 "digest": "sha256", 00:17:16.896 "dhgroup": "ffdhe2048" 00:17:16.896 } 00:17:16.896 } 00:17:16.896 ]' 00:17:16.896 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.896 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.896 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.896 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:16.896 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.896 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.896 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.896 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.156 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:17:17.156 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret 
DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:17:17.725 05:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.725 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:17.725 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.725 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.725 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.725 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.725 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:17.725 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:17.985 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:17.985 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.985 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:17.985 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:17.985 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:17.985 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.985 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.985 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.985 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.985 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.985 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.985 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.985 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.244 00:17:18.244 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.244 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.244 05:12:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.244 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.244 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.244 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.244 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.244 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.244 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.244 { 00:17:18.244 "cntlid": 11, 00:17:18.244 "qid": 0, 00:17:18.244 "state": "enabled", 00:17:18.244 "thread": "nvmf_tgt_poll_group_000", 00:17:18.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:18.244 "listen_address": { 00:17:18.244 "trtype": "TCP", 00:17:18.244 "adrfam": "IPv4", 00:17:18.244 "traddr": "10.0.0.2", 00:17:18.244 "trsvcid": "4420" 00:17:18.244 }, 00:17:18.244 "peer_address": { 00:17:18.244 "trtype": "TCP", 00:17:18.244 "adrfam": "IPv4", 00:17:18.244 "traddr": "10.0.0.1", 00:17:18.244 "trsvcid": "54110" 00:17:18.244 }, 00:17:18.244 "auth": { 00:17:18.244 "state": "completed", 00:17:18.244 "digest": "sha256", 00:17:18.244 "dhgroup": "ffdhe2048" 00:17:18.244 } 00:17:18.244 } 00:17:18.244 ]' 00:17:18.244 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.503 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.503 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.503 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:18.503 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.503 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.503 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.503 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.761 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:17:18.761 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:17:19.328 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.328 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:19.328 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.328 05:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.328 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.328 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.328 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:19.329 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:19.587 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:19.587 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.587 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:19.587 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:19.587 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:19.587 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.587 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.587 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.587 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.587 05:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.587 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.587 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.587 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.846 00:17:19.846 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.846 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.846 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.846 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.846 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.846 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.846 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.846 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.846 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.846 { 00:17:19.846 "cntlid": 13, 00:17:19.846 "qid": 0, 00:17:19.846 "state": "enabled", 00:17:19.846 "thread": "nvmf_tgt_poll_group_000", 00:17:19.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:19.846 "listen_address": { 00:17:19.846 "trtype": "TCP", 00:17:19.846 "adrfam": "IPv4", 00:17:19.846 "traddr": "10.0.0.2", 00:17:19.846 "trsvcid": "4420" 00:17:19.846 }, 00:17:19.846 "peer_address": { 00:17:19.846 "trtype": "TCP", 00:17:19.846 "adrfam": "IPv4", 00:17:19.846 "traddr": "10.0.0.1", 00:17:19.846 "trsvcid": "54138" 00:17:19.846 }, 00:17:19.846 "auth": { 00:17:19.846 "state": "completed", 00:17:19.846 "digest": "sha256", 00:17:19.846 "dhgroup": "ffdhe2048" 00:17:19.846 } 00:17:19.846 } 00:17:19.846 ]' 00:17:19.846 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.105 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.105 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.105 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:20.105 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.105 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.105 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.105 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:17:20.363 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:17:20.363 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:17:20.930 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.930 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:20.930 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.930 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.930 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.930 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.930 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:20.930 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:20.930 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:20.930 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.930 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:20.930 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:20.930 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:20.930 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.930 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:17:20.930 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.930 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.930 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.930 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:20.931 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.931 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.189 00:17:21.189 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.189 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.189 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.448 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.448 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.448 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.448 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.448 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.448 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.448 { 00:17:21.448 "cntlid": 15, 00:17:21.448 "qid": 0, 00:17:21.448 "state": "enabled", 00:17:21.448 "thread": "nvmf_tgt_poll_group_000", 00:17:21.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:21.448 "listen_address": { 00:17:21.448 "trtype": "TCP", 00:17:21.448 "adrfam": "IPv4", 00:17:21.448 "traddr": "10.0.0.2", 00:17:21.448 "trsvcid": "4420" 00:17:21.448 }, 00:17:21.448 "peer_address": { 00:17:21.448 "trtype": "TCP", 00:17:21.448 "adrfam": "IPv4", 00:17:21.448 "traddr": "10.0.0.1", 00:17:21.448 "trsvcid": "56168" 00:17:21.448 }, 00:17:21.448 "auth": { 00:17:21.448 
"state": "completed", 00:17:21.448 "digest": "sha256", 00:17:21.448 "dhgroup": "ffdhe2048" 00:17:21.448 } 00:17:21.448 } 00:17:21.448 ]' 00:17:21.449 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.449 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.449 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.707 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:21.707 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.707 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.707 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.707 05:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.707 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:17:21.707 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:17:22.280 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.280 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.280 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:22.280 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.280 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.280 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.280 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.280 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.280 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:22.280 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:22.538 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:22.538 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.538 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:22.538 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:22.538 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:22.538 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.538 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.538 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.538 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.538 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.538 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.538 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.538 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.796 00:17:22.796 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.796 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.796 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.054 
05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.054 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.054 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.054 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.054 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.054 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.054 { 00:17:23.054 "cntlid": 17, 00:17:23.054 "qid": 0, 00:17:23.054 "state": "enabled", 00:17:23.054 "thread": "nvmf_tgt_poll_group_000", 00:17:23.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:23.054 "listen_address": { 00:17:23.054 "trtype": "TCP", 00:17:23.054 "adrfam": "IPv4", 00:17:23.054 "traddr": "10.0.0.2", 00:17:23.054 "trsvcid": "4420" 00:17:23.054 }, 00:17:23.054 "peer_address": { 00:17:23.054 "trtype": "TCP", 00:17:23.054 "adrfam": "IPv4", 00:17:23.054 "traddr": "10.0.0.1", 00:17:23.054 "trsvcid": "56208" 00:17:23.054 }, 00:17:23.054 "auth": { 00:17:23.054 "state": "completed", 00:17:23.054 "digest": "sha256", 00:17:23.054 "dhgroup": "ffdhe3072" 00:17:23.054 } 00:17:23.054 } 00:17:23.054 ]' 00:17:23.054 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.054 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:23.054 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.054 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:23.054 05:12:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.313 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.313 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.313 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.313 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:17:23.313 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:17:23.952 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.952 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:23.952 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.952 05:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.952 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.952 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.952 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:23.952 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:24.261 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:24.262 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.262 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:24.262 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:24.262 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:24.262 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.262 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.262 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.262 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.262 05:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.262 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.262 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.262 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.593 00:17:24.593 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.593 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.593 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.593 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.593 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.593 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.593 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.593 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.593 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.593 { 00:17:24.593 "cntlid": 19, 00:17:24.593 "qid": 0, 00:17:24.593 "state": "enabled", 00:17:24.593 "thread": "nvmf_tgt_poll_group_000", 00:17:24.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:24.593 "listen_address": { 00:17:24.593 "trtype": "TCP", 00:17:24.593 "adrfam": "IPv4", 00:17:24.593 "traddr": "10.0.0.2", 00:17:24.593 "trsvcid": "4420" 00:17:24.593 }, 00:17:24.593 "peer_address": { 00:17:24.593 "trtype": "TCP", 00:17:24.593 "adrfam": "IPv4", 00:17:24.593 "traddr": "10.0.0.1", 00:17:24.593 "trsvcid": "56240" 00:17:24.593 }, 00:17:24.593 "auth": { 00:17:24.593 "state": "completed", 00:17:24.593 "digest": "sha256", 00:17:24.593 "dhgroup": "ffdhe3072" 00:17:24.593 } 00:17:24.593 } 00:17:24.593 ]' 00:17:24.593 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.593 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.593 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.876 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:24.876 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.876 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.876 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.876 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
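The qpairs dump printed by `nvmf_subsystem_get_qpairs` is what `target/auth.sh` matches with `jq` (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`). The same check can be sketched in Python against a trimmed copy of the JSON from the log above (the full entries also carry `thread`, `hostnqn`, `listen_address`, and `peer_address` fields):

```python
import json

# Trimmed copy of one nvmf_subsystem_get_qpairs entry as captured in the log;
# values are taken from the trace above, not from a live target.
qpairs = json.loads("""
[
  {
    "cntlid": 17,
    "qid": 0,
    "state": "enabled",
    "auth": { "state": "completed", "digest": "sha256", "dhgroup": "ffdhe3072" }
  }
]
""")

# Equivalent of the script's jq probes '.[0].auth.digest' / dhgroup / state:
auth = qpairs[0]["auth"]
assert auth["state"] == "completed"   # DH-HMAC-CHAP negotiation finished
assert auth["digest"] == "sha256"     # matches the digest passed to bdev_nvme_set_options
assert auth["dhgroup"] == "ffdhe3072" # matches the dhgroup passed to bdev_nvme_set_options
```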
00:17:24.876 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:17:24.876 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:17:25.540 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.540 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:25.540 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.540 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.540 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.540 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.540 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:25.540 05:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:25.838 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:25.838 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.838 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:25.838 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:25.838 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:25.838 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.838 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.838 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.838 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.838 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.838 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.838 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.839 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.097 00:17:26.097 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.097 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.097 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.098 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.098 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.098 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.098 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.098 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.098 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.098 { 00:17:26.098 "cntlid": 21, 00:17:26.098 "qid": 0, 00:17:26.098 "state": "enabled", 00:17:26.098 "thread": "nvmf_tgt_poll_group_000", 00:17:26.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:26.098 "listen_address": { 00:17:26.098 "trtype": "TCP", 00:17:26.098 "adrfam": "IPv4", 00:17:26.098 "traddr": "10.0.0.2", 00:17:26.098 "trsvcid": "4420" 00:17:26.098 }, 00:17:26.098 "peer_address": { 00:17:26.098 "trtype": "TCP", 00:17:26.098 "adrfam": "IPv4", 
00:17:26.098 "traddr": "10.0.0.1", 00:17:26.098 "trsvcid": "56254" 00:17:26.098 }, 00:17:26.098 "auth": { 00:17:26.098 "state": "completed", 00:17:26.098 "digest": "sha256", 00:17:26.098 "dhgroup": "ffdhe3072" 00:17:26.098 } 00:17:26.098 } 00:17:26.098 ]' 00:17:26.098 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.356 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:26.356 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.356 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:26.356 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.356 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.356 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.356 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.615 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:17:26.615 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:17:27.184 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.184 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:27.184 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.184 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.184 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.184 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.184 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:27.184 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:27.184 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:27.184 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.184 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:27.184 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:27.184 05:12:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:27.184 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.184 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:17:27.184 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.184 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.442 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.442 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:27.442 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.442 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.442 00:17:27.701 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.701 05:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.701 05:12:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.701 05:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.701 05:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.701 05:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.701 05:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.701 05:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.701 05:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.701 { 00:17:27.701 "cntlid": 23, 00:17:27.701 "qid": 0, 00:17:27.701 "state": "enabled", 00:17:27.701 "thread": "nvmf_tgt_poll_group_000", 00:17:27.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:27.701 "listen_address": { 00:17:27.701 "trtype": "TCP", 00:17:27.701 "adrfam": "IPv4", 00:17:27.701 "traddr": "10.0.0.2", 00:17:27.701 "trsvcid": "4420" 00:17:27.701 }, 00:17:27.701 "peer_address": { 00:17:27.701 "trtype": "TCP", 00:17:27.701 "adrfam": "IPv4", 00:17:27.701 "traddr": "10.0.0.1", 00:17:27.701 "trsvcid": "56274" 00:17:27.701 }, 00:17:27.701 "auth": { 00:17:27.701 "state": "completed", 00:17:27.701 "digest": "sha256", 00:17:27.701 "dhgroup": "ffdhe3072" 00:17:27.701 } 00:17:27.701 } 00:17:27.701 ]' 00:17:27.701 05:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.960 05:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:27.960 05:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.960 05:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:27.960 05:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.960 05:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.960 05:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.960 05:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.219 05:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:17:28.219 05:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:17:28.786 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.786 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:28.786 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.786 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.786 05:12:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.786 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.786 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.786 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:28.786 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:28.786 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:28.786 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.786 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:28.786 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:28.786 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:28.786 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.786 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.786 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.786 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.786 
05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.786 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.786 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.786 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.044 00:17:29.303 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.303 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.303 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.303 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.303 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.303 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.303 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.303 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.303 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.303 { 00:17:29.303 "cntlid": 25, 00:17:29.303 "qid": 0, 00:17:29.303 "state": "enabled", 00:17:29.303 "thread": "nvmf_tgt_poll_group_000", 00:17:29.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:29.303 "listen_address": { 00:17:29.303 "trtype": "TCP", 00:17:29.303 "adrfam": "IPv4", 00:17:29.303 "traddr": "10.0.0.2", 00:17:29.303 "trsvcid": "4420" 00:17:29.303 }, 00:17:29.303 "peer_address": { 00:17:29.303 "trtype": "TCP", 00:17:29.303 "adrfam": "IPv4", 00:17:29.303 "traddr": "10.0.0.1", 00:17:29.303 "trsvcid": "56302" 00:17:29.303 }, 00:17:29.303 "auth": { 00:17:29.303 "state": "completed", 00:17:29.303 "digest": "sha256", 00:17:29.303 "dhgroup": "ffdhe4096" 00:17:29.303 } 00:17:29.303 } 00:17:29.303 ]' 00:17:29.303 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.561 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:29.561 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.561 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:29.561 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.561 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.561 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.561 05:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:17:29.819 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:17:29.819 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:17:30.387 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.387 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:30.387 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.387 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.387 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.387 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.387 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:30.387 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:30.387 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:30.387 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.646 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:30.646 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:30.646 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:30.646 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.646 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.646 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.646 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.646 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.646 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.646 05:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.646 05:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.904 00:17:30.904 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.904 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.904 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.904 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.904 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.904 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.904 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.904 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.904 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.904 { 00:17:30.904 "cntlid": 27, 00:17:30.904 "qid": 0, 00:17:30.904 "state": "enabled", 00:17:30.904 "thread": "nvmf_tgt_poll_group_000", 00:17:30.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:30.904 "listen_address": { 00:17:30.904 "trtype": "TCP", 00:17:30.904 "adrfam": "IPv4", 00:17:30.904 "traddr": "10.0.0.2", 00:17:30.904 "trsvcid": "4420" 00:17:30.904 }, 00:17:30.904 "peer_address": { 
00:17:30.904 "trtype": "TCP", 00:17:30.904 "adrfam": "IPv4", 00:17:30.904 "traddr": "10.0.0.1", 00:17:30.904 "trsvcid": "56318" 00:17:30.904 }, 00:17:30.904 "auth": { 00:17:30.904 "state": "completed", 00:17:30.904 "digest": "sha256", 00:17:30.904 "dhgroup": "ffdhe4096" 00:17:30.904 } 00:17:30.904 } 00:17:30.904 ]' 00:17:30.904 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.163 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.163 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.163 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:31.163 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.163 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.163 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.163 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.421 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:17:31.421 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:17:31.988 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.988 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:31.988 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.988 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.988 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.988 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.988 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:31.988 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:31.988 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:31.988 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.988 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:31.988 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:31.988 05:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:31.988 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.988 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.988 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.988 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.988 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.988 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.988 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.988 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.247 00:17:32.247 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.247 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.247 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.506 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.506 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.506 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.506 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.506 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.506 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.506 { 00:17:32.506 "cntlid": 29, 00:17:32.506 "qid": 0, 00:17:32.506 "state": "enabled", 00:17:32.506 "thread": "nvmf_tgt_poll_group_000", 00:17:32.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:32.506 "listen_address": { 00:17:32.506 "trtype": "TCP", 00:17:32.506 "adrfam": "IPv4", 00:17:32.506 "traddr": "10.0.0.2", 00:17:32.506 "trsvcid": "4420" 00:17:32.506 }, 00:17:32.506 "peer_address": { 00:17:32.506 "trtype": "TCP", 00:17:32.506 "adrfam": "IPv4", 00:17:32.506 "traddr": "10.0.0.1", 00:17:32.506 "trsvcid": "55608" 00:17:32.506 }, 00:17:32.506 "auth": { 00:17:32.506 "state": "completed", 00:17:32.506 "digest": "sha256", 00:17:32.506 "dhgroup": "ffdhe4096" 00:17:32.506 } 00:17:32.506 } 00:17:32.506 ]' 00:17:32.506 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.506 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:32.506 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
00:17:32.773 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:32.773 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.773 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.773 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.773 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.773 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:17:32.773 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:17:33.340 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.340 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:33.340 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.340 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.340 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.340 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.340 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:33.340 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:33.599 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:33.599 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.599 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:33.599 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:33.599 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:33.599 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.599 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:17:33.599 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.599 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:17:33.599 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.599 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:33.599 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.599 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.858 00:17:33.858 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.858 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.858 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.116 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.116 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.116 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.116 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.116 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:34.116 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.116 { 00:17:34.116 "cntlid": 31, 00:17:34.116 "qid": 0, 00:17:34.116 "state": "enabled", 00:17:34.116 "thread": "nvmf_tgt_poll_group_000", 00:17:34.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:34.117 "listen_address": { 00:17:34.117 "trtype": "TCP", 00:17:34.117 "adrfam": "IPv4", 00:17:34.117 "traddr": "10.0.0.2", 00:17:34.117 "trsvcid": "4420" 00:17:34.117 }, 00:17:34.117 "peer_address": { 00:17:34.117 "trtype": "TCP", 00:17:34.117 "adrfam": "IPv4", 00:17:34.117 "traddr": "10.0.0.1", 00:17:34.117 "trsvcid": "55644" 00:17:34.117 }, 00:17:34.117 "auth": { 00:17:34.117 "state": "completed", 00:17:34.117 "digest": "sha256", 00:17:34.117 "dhgroup": "ffdhe4096" 00:17:34.117 } 00:17:34.117 } 00:17:34.117 ]' 00:17:34.117 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.117 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.117 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.117 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:34.117 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.375 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.375 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.375 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.375 05:12:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:17:34.375 05:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:17:34.942 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.942 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:34.942 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.942 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.942 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.942 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:34.942 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.942 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:34.942 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:35.201 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:35.201 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.201 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:35.201 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:35.201 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:35.201 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.201 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.201 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.201 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.201 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.201 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.201 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.201 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.461 00:17:35.461 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.461 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.461 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.719 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.719 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.719 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.719 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.719 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.719 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.719 { 00:17:35.719 "cntlid": 33, 00:17:35.719 "qid": 0, 00:17:35.719 "state": "enabled", 00:17:35.719 "thread": "nvmf_tgt_poll_group_000", 00:17:35.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:35.719 "listen_address": { 00:17:35.719 "trtype": "TCP", 00:17:35.719 "adrfam": "IPv4", 00:17:35.719 "traddr": "10.0.0.2", 00:17:35.719 "trsvcid": "4420" 00:17:35.719 }, 00:17:35.719 "peer_address": { 00:17:35.719 "trtype": "TCP", 00:17:35.719 "adrfam": "IPv4", 
00:17:35.719 "traddr": "10.0.0.1", 00:17:35.719 "trsvcid": "55676" 00:17:35.719 }, 00:17:35.719 "auth": { 00:17:35.719 "state": "completed", 00:17:35.719 "digest": "sha256", 00:17:35.719 "dhgroup": "ffdhe6144" 00:17:35.719 } 00:17:35.719 } 00:17:35.719 ]' 00:17:35.719 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.720 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.720 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.720 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:35.720 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.978 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.978 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.978 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.978 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:17:35.978 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:17:36.546 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.546 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:36.546 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.546 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.546 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.546 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.546 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:36.546 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:36.804 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:36.804 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.804 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:36.805 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:17:36.805 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:36.805 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.805 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.805 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.805 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.805 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.805 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.805 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.805 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.373 00:17:37.373 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.373 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.373 
05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.373 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.373 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.373 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.373 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.373 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.373 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.373 { 00:17:37.373 "cntlid": 35, 00:17:37.373 "qid": 0, 00:17:37.373 "state": "enabled", 00:17:37.373 "thread": "nvmf_tgt_poll_group_000", 00:17:37.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:37.373 "listen_address": { 00:17:37.373 "trtype": "TCP", 00:17:37.373 "adrfam": "IPv4", 00:17:37.373 "traddr": "10.0.0.2", 00:17:37.373 "trsvcid": "4420" 00:17:37.373 }, 00:17:37.373 "peer_address": { 00:17:37.373 "trtype": "TCP", 00:17:37.373 "adrfam": "IPv4", 00:17:37.373 "traddr": "10.0.0.1", 00:17:37.373 "trsvcid": "55700" 00:17:37.373 }, 00:17:37.373 "auth": { 00:17:37.373 "state": "completed", 00:17:37.373 "digest": "sha256", 00:17:37.373 "dhgroup": "ffdhe6144" 00:17:37.373 } 00:17:37.373 } 00:17:37.373 ]' 00:17:37.373 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.373 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:37.373 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.632 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:37.632 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.632 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.632 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.632 05:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.890 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:17:37.890 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:17:38.457 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.457 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:38.457 05:12:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.457 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.457 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.457 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.457 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:38.457 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:38.457 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:38.457 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.457 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:38.457 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:38.457 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:38.457 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.457 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.457 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.457 05:12:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.457 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.457 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.457 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.457 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.023 00:17:39.023 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.023 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.023 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.023 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.023 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.023 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.023 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:39.023 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.023 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.023 { 00:17:39.023 "cntlid": 37, 00:17:39.023 "qid": 0, 00:17:39.023 "state": "enabled", 00:17:39.023 "thread": "nvmf_tgt_poll_group_000", 00:17:39.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:39.023 "listen_address": { 00:17:39.023 "trtype": "TCP", 00:17:39.023 "adrfam": "IPv4", 00:17:39.023 "traddr": "10.0.0.2", 00:17:39.023 "trsvcid": "4420" 00:17:39.023 }, 00:17:39.023 "peer_address": { 00:17:39.023 "trtype": "TCP", 00:17:39.023 "adrfam": "IPv4", 00:17:39.023 "traddr": "10.0.0.1", 00:17:39.023 "trsvcid": "55730" 00:17:39.023 }, 00:17:39.023 "auth": { 00:17:39.023 "state": "completed", 00:17:39.023 "digest": "sha256", 00:17:39.023 "dhgroup": "ffdhe6144" 00:17:39.023 } 00:17:39.023 } 00:17:39.023 ]' 00:17:39.023 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.023 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:39.023 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.281 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:39.281 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.281 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.281 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.281 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.538 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:17:39.538 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:17:39.848 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.849 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:39.849 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.849 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.849 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.849 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.849 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:39.849 05:12:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:40.107 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:40.107 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.107 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:40.107 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:40.107 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:40.107 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.107 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:17:40.107 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.107 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.107 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.107 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:40.107 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.107 05:12:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.673 00:17:40.673 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.673 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.673 05:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.673 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.673 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.673 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.673 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.673 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.673 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.673 { 00:17:40.673 "cntlid": 39, 00:17:40.673 "qid": 0, 00:17:40.673 "state": "enabled", 00:17:40.673 "thread": "nvmf_tgt_poll_group_000", 00:17:40.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:40.673 "listen_address": { 00:17:40.673 "trtype": "TCP", 00:17:40.673 "adrfam": "IPv4", 00:17:40.673 "traddr": "10.0.0.2", 00:17:40.673 "trsvcid": "4420" 00:17:40.673 }, 00:17:40.673 "peer_address": { 00:17:40.673 "trtype": 
"TCP", 00:17:40.673 "adrfam": "IPv4", 00:17:40.673 "traddr": "10.0.0.1", 00:17:40.673 "trsvcid": "55744" 00:17:40.673 }, 00:17:40.673 "auth": { 00:17:40.673 "state": "completed", 00:17:40.673 "digest": "sha256", 00:17:40.673 "dhgroup": "ffdhe6144" 00:17:40.673 } 00:17:40.673 } 00:17:40.673 ]' 00:17:40.673 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.673 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.673 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.930 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:40.930 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.930 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.930 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.930 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.186 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:17:41.186 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 
00:17:41.752 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.752 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:41.752 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.752 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.752 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.752 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.752 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.752 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:41.752 05:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:41.752 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:41.752 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.752 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:41.752 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:41.752 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key0 00:17:41.752 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.752 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.752 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.752 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.752 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.752 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.752 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.752 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.319 00:17:42.319 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.319 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.319 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.578 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.578 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.578 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.578 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.578 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.578 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.578 { 00:17:42.578 "cntlid": 41, 00:17:42.578 "qid": 0, 00:17:42.578 "state": "enabled", 00:17:42.578 "thread": "nvmf_tgt_poll_group_000", 00:17:42.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:42.578 "listen_address": { 00:17:42.578 "trtype": "TCP", 00:17:42.578 "adrfam": "IPv4", 00:17:42.578 "traddr": "10.0.0.2", 00:17:42.578 "trsvcid": "4420" 00:17:42.578 }, 00:17:42.578 "peer_address": { 00:17:42.578 "trtype": "TCP", 00:17:42.578 "adrfam": "IPv4", 00:17:42.578 "traddr": "10.0.0.1", 00:17:42.578 "trsvcid": "60532" 00:17:42.578 }, 00:17:42.578 "auth": { 00:17:42.578 "state": "completed", 00:17:42.578 "digest": "sha256", 00:17:42.578 "dhgroup": "ffdhe8192" 00:17:42.578 } 00:17:42.578 } 00:17:42.578 ]' 00:17:42.578 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.578 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:42.578 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.578 05:12:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:42.578 05:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.578 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.578 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.578 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.837 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:17:42.837 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:17:43.403 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.403 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:43.403 05:12:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.403 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.403 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.403 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.403 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:43.403 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:43.662 05:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:43.662 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.662 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:43.662 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:43.662 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:43.663 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.663 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.663 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.663 05:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.663 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.663 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.663 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.663 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.233 00:17:44.233 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.233 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.233 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.492 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.492 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.492 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.492 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:44.492 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.492 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.492 { 00:17:44.492 "cntlid": 43, 00:17:44.492 "qid": 0, 00:17:44.492 "state": "enabled", 00:17:44.492 "thread": "nvmf_tgt_poll_group_000", 00:17:44.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:44.492 "listen_address": { 00:17:44.492 "trtype": "TCP", 00:17:44.492 "adrfam": "IPv4", 00:17:44.492 "traddr": "10.0.0.2", 00:17:44.492 "trsvcid": "4420" 00:17:44.492 }, 00:17:44.492 "peer_address": { 00:17:44.492 "trtype": "TCP", 00:17:44.492 "adrfam": "IPv4", 00:17:44.492 "traddr": "10.0.0.1", 00:17:44.492 "trsvcid": "60568" 00:17:44.492 }, 00:17:44.492 "auth": { 00:17:44.492 "state": "completed", 00:17:44.492 "digest": "sha256", 00:17:44.492 "dhgroup": "ffdhe8192" 00:17:44.492 } 00:17:44.492 } 00:17:44.492 ]' 00:17:44.492 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.492 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.492 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.492 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:44.492 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.492 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.492 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.492 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.750 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:17:44.750 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:17:45.337 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.337 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:45.337 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.337 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.337 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.337 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.337 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:45.337 05:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:45.596 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:45.596 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.596 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:45.596 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:45.596 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:45.596 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.596 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.596 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.596 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.596 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.597 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.597 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.597 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.854 00:17:45.854 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.854 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.854 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.111 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.111 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.111 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.111 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.111 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.111 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.111 { 00:17:46.111 "cntlid": 45, 00:17:46.111 "qid": 0, 00:17:46.111 "state": "enabled", 00:17:46.111 "thread": "nvmf_tgt_poll_group_000", 00:17:46.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:46.111 "listen_address": { 00:17:46.111 "trtype": "TCP", 00:17:46.111 "adrfam": "IPv4", 00:17:46.111 "traddr": "10.0.0.2", 00:17:46.111 
"trsvcid": "4420" 00:17:46.111 }, 00:17:46.111 "peer_address": { 00:17:46.111 "trtype": "TCP", 00:17:46.111 "adrfam": "IPv4", 00:17:46.111 "traddr": "10.0.0.1", 00:17:46.111 "trsvcid": "60596" 00:17:46.111 }, 00:17:46.111 "auth": { 00:17:46.111 "state": "completed", 00:17:46.111 "digest": "sha256", 00:17:46.111 "dhgroup": "ffdhe8192" 00:17:46.111 } 00:17:46.111 } 00:17:46.111 ]' 00:17:46.111 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.111 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.111 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.369 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:46.369 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.369 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.369 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.369 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.627 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:17:46.627 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 
809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:17:47.195 05:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.195 05:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:47.195 05:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.195 05:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.195 05:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.195 05:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.195 05:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:47.195 05:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:47.195 05:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:47.195 05:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.195 05:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:47.195 05:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:47.195 05:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:47.195 05:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.195 05:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:17:47.195 05:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.195 05:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.195 05:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.195 05:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:47.195 05:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.195 05:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.825 00:17:47.825 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.825 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.825 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.083 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.083 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.083 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.083 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.083 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.083 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.083 { 00:17:48.083 "cntlid": 47, 00:17:48.083 "qid": 0, 00:17:48.083 "state": "enabled", 00:17:48.083 "thread": "nvmf_tgt_poll_group_000", 00:17:48.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:48.083 "listen_address": { 00:17:48.083 "trtype": "TCP", 00:17:48.083 "adrfam": "IPv4", 00:17:48.083 "traddr": "10.0.0.2", 00:17:48.083 "trsvcid": "4420" 00:17:48.083 }, 00:17:48.083 "peer_address": { 00:17:48.083 "trtype": "TCP", 00:17:48.083 "adrfam": "IPv4", 00:17:48.083 "traddr": "10.0.0.1", 00:17:48.083 "trsvcid": "60634" 00:17:48.083 }, 00:17:48.083 "auth": { 00:17:48.083 "state": "completed", 00:17:48.083 "digest": "sha256", 00:17:48.083 "dhgroup": "ffdhe8192" 00:17:48.083 } 00:17:48.083 } 00:17:48.083 ]' 00:17:48.083 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.083 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.083 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.083 05:12:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:48.083 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.083 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.083 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.083 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.341 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:17:48.341 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:17:48.906 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.906 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:48.906 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.906 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:48.906 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.906 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:48.906 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.906 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.906 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:48.906 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:49.164 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:49.164 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.164 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:49.164 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:49.164 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:49.164 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.164 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.164 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:49.164 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.164 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.164 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.164 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.164 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.434 00:17:49.434 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.434 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.434 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.434 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.434 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.434 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.434 05:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.692 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.692 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.692 { 00:17:49.692 "cntlid": 49, 00:17:49.692 "qid": 0, 00:17:49.692 "state": "enabled", 00:17:49.692 "thread": "nvmf_tgt_poll_group_000", 00:17:49.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:49.692 "listen_address": { 00:17:49.692 "trtype": "TCP", 00:17:49.692 "adrfam": "IPv4", 00:17:49.692 "traddr": "10.0.0.2", 00:17:49.692 "trsvcid": "4420" 00:17:49.692 }, 00:17:49.692 "peer_address": { 00:17:49.692 "trtype": "TCP", 00:17:49.692 "adrfam": "IPv4", 00:17:49.692 "traddr": "10.0.0.1", 00:17:49.692 "trsvcid": "60674" 00:17:49.692 }, 00:17:49.692 "auth": { 00:17:49.692 "state": "completed", 00:17:49.692 "digest": "sha384", 00:17:49.692 "dhgroup": "null" 00:17:49.692 } 00:17:49.692 } 00:17:49.692 ]' 00:17:49.692 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.692 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.692 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.692 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:49.692 05:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.692 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.692 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.692 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.951 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:17:49.951 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:17:50.519 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.519 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:50.519 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.519 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.519 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.519 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.519 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:50.519 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:50.778 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:50.778 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.778 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:50.778 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:50.778 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:50.778 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.778 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.778 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.778 05:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.778 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.778 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.778 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.778 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.778 00:17:51.036 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.036 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.036 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.036 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.036 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.036 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.036 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.036 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.036 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.036 { 00:17:51.036 "cntlid": 51, 00:17:51.036 "qid": 0, 00:17:51.036 "state": "enabled", 00:17:51.036 "thread": "nvmf_tgt_poll_group_000", 00:17:51.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:51.036 "listen_address": { 
00:17:51.036 "trtype": "TCP", 00:17:51.036 "adrfam": "IPv4", 00:17:51.036 "traddr": "10.0.0.2", 00:17:51.036 "trsvcid": "4420" 00:17:51.036 }, 00:17:51.036 "peer_address": { 00:17:51.036 "trtype": "TCP", 00:17:51.036 "adrfam": "IPv4", 00:17:51.036 "traddr": "10.0.0.1", 00:17:51.036 "trsvcid": "60688" 00:17:51.036 }, 00:17:51.036 "auth": { 00:17:51.036 "state": "completed", 00:17:51.036 "digest": "sha384", 00:17:51.037 "dhgroup": "null" 00:17:51.037 } 00:17:51.037 } 00:17:51.037 ]' 00:17:51.037 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.296 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:51.296 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.296 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:51.296 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.296 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.296 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.296 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.555 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:17:51.555 05:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:17:52.122 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.123 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:52.123 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.123 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.123 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.123 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.123 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:52.123 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:52.123 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:52.123 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.123 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:52.123 
05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:52.123 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:52.123 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.123 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.123 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.123 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.123 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.123 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.123 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.123 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.382 00:17:52.382 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.382 05:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.382 05:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.640 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.640 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.640 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.640 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.640 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.640 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.640 { 00:17:52.640 "cntlid": 53, 00:17:52.640 "qid": 0, 00:17:52.640 "state": "enabled", 00:17:52.640 "thread": "nvmf_tgt_poll_group_000", 00:17:52.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:52.640 "listen_address": { 00:17:52.640 "trtype": "TCP", 00:17:52.640 "adrfam": "IPv4", 00:17:52.640 "traddr": "10.0.0.2", 00:17:52.640 "trsvcid": "4420" 00:17:52.640 }, 00:17:52.640 "peer_address": { 00:17:52.640 "trtype": "TCP", 00:17:52.640 "adrfam": "IPv4", 00:17:52.640 "traddr": "10.0.0.1", 00:17:52.640 "trsvcid": "34392" 00:17:52.640 }, 00:17:52.640 "auth": { 00:17:52.640 "state": "completed", 00:17:52.640 "digest": "sha384", 00:17:52.640 "dhgroup": "null" 00:17:52.640 } 00:17:52.640 } 00:17:52.640 ]' 00:17:52.640 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.640 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:17:52.640 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.898 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:52.898 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.898 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.898 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.898 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.156 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:17:53.156 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:17:53.724 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.724 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:53.724 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.724 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.724 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.724 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.724 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:53.724 05:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:53.724 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:53.724 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.724 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:53.724 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:53.725 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:53.725 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.725 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:17:53.725 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:53.725 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.725 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.725 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:53.725 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.725 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.983 00:17:53.983 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.983 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.984 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.242 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.242 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.242 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.242 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:54.242 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.242 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.242 { 00:17:54.242 "cntlid": 55, 00:17:54.242 "qid": 0, 00:17:54.242 "state": "enabled", 00:17:54.242 "thread": "nvmf_tgt_poll_group_000", 00:17:54.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:54.242 "listen_address": { 00:17:54.242 "trtype": "TCP", 00:17:54.242 "adrfam": "IPv4", 00:17:54.242 "traddr": "10.0.0.2", 00:17:54.242 "trsvcid": "4420" 00:17:54.242 }, 00:17:54.242 "peer_address": { 00:17:54.242 "trtype": "TCP", 00:17:54.242 "adrfam": "IPv4", 00:17:54.242 "traddr": "10.0.0.1", 00:17:54.242 "trsvcid": "34412" 00:17:54.242 }, 00:17:54.242 "auth": { 00:17:54.242 "state": "completed", 00:17:54.242 "digest": "sha384", 00:17:54.242 "dhgroup": "null" 00:17:54.242 } 00:17:54.242 } 00:17:54.242 ]' 00:17:54.242 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.242 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.242 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.242 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:54.242 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.501 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.501 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.501 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.501 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:17:54.501 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:17:55.068 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.068 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:55.068 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.068 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.068 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.068 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.068 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.068 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:55.068 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:55.326 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:55.326 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.326 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:55.326 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:55.326 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:55.327 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.327 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.327 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.327 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.327 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.327 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.327 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.327 05:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.585 00:17:55.585 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.585 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.585 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.844 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.844 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.844 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.844 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.844 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.844 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.844 { 00:17:55.844 "cntlid": 57, 00:17:55.844 "qid": 0, 00:17:55.844 "state": "enabled", 00:17:55.844 "thread": "nvmf_tgt_poll_group_000", 00:17:55.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:55.844 "listen_address": { 00:17:55.844 "trtype": "TCP", 00:17:55.844 "adrfam": "IPv4", 00:17:55.844 "traddr": "10.0.0.2", 00:17:55.844 "trsvcid": "4420" 00:17:55.844 }, 00:17:55.844 "peer_address": { 
00:17:55.844 "trtype": "TCP", 00:17:55.844 "adrfam": "IPv4", 00:17:55.844 "traddr": "10.0.0.1", 00:17:55.844 "trsvcid": "34454" 00:17:55.844 }, 00:17:55.844 "auth": { 00:17:55.844 "state": "completed", 00:17:55.844 "digest": "sha384", 00:17:55.844 "dhgroup": "ffdhe2048" 00:17:55.844 } 00:17:55.844 } 00:17:55.844 ]' 00:17:55.844 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.844 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:55.844 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.844 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:55.844 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.844 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.844 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.844 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.103 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:17:56.103 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 
809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:17:56.669 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.669 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:56.669 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.669 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.669 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.669 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.669 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:56.669 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:56.927 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:56.927 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.927 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:56.927 05:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:56.927 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:56.927 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.927 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.927 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.927 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.927 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.927 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.927 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.927 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.185 00:17:57.185 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.185 05:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.185 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.444 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.444 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.444 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.444 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.444 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.444 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.444 { 00:17:57.444 "cntlid": 59, 00:17:57.444 "qid": 0, 00:17:57.444 "state": "enabled", 00:17:57.444 "thread": "nvmf_tgt_poll_group_000", 00:17:57.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:57.444 "listen_address": { 00:17:57.444 "trtype": "TCP", 00:17:57.444 "adrfam": "IPv4", 00:17:57.444 "traddr": "10.0.0.2", 00:17:57.444 "trsvcid": "4420" 00:17:57.444 }, 00:17:57.444 "peer_address": { 00:17:57.444 "trtype": "TCP", 00:17:57.444 "adrfam": "IPv4", 00:17:57.444 "traddr": "10.0.0.1", 00:17:57.444 "trsvcid": "34476" 00:17:57.444 }, 00:17:57.444 "auth": { 00:17:57.444 "state": "completed", 00:17:57.444 "digest": "sha384", 00:17:57.444 "dhgroup": "ffdhe2048" 00:17:57.444 } 00:17:57.444 } 00:17:57.444 ]' 00:17:57.444 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.444 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:17:57.444 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.444 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:57.444 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.444 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.444 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.444 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.703 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:17:57.703 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:17:58.269 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.269 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:58.269 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.269 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.269 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.269 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.269 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:58.269 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:58.527 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:58.527 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.527 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:58.527 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:58.527 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:58.527 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.527 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.527 05:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.527 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.527 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.527 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.527 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.527 05:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.786 00:17:58.786 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.786 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.786 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.044 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.044 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.044 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.044 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.044 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.044 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.044 { 00:17:59.044 "cntlid": 61, 00:17:59.044 "qid": 0, 00:17:59.044 "state": "enabled", 00:17:59.044 "thread": "nvmf_tgt_poll_group_000", 00:17:59.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:17:59.044 "listen_address": { 00:17:59.044 "trtype": "TCP", 00:17:59.044 "adrfam": "IPv4", 00:17:59.044 "traddr": "10.0.0.2", 00:17:59.044 "trsvcid": "4420" 00:17:59.044 }, 00:17:59.044 "peer_address": { 00:17:59.044 "trtype": "TCP", 00:17:59.044 "adrfam": "IPv4", 00:17:59.044 "traddr": "10.0.0.1", 00:17:59.044 "trsvcid": "34494" 00:17:59.044 }, 00:17:59.044 "auth": { 00:17:59.044 "state": "completed", 00:17:59.044 "digest": "sha384", 00:17:59.044 "dhgroup": "ffdhe2048" 00:17:59.044 } 00:17:59.044 } 00:17:59.044 ]' 00:17:59.044 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.044 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.044 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.044 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:59.044 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.044 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.044 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:59.044 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.302 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:17:59.302 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:17:59.871 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.871 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:59.871 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.871 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.871 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.871 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.871 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:59.871 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:00.129 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:00.129 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.129 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:00.129 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:00.130 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:00.130 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.130 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:18:00.130 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.130 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.130 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.130 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:00.130 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.130 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.388 00:18:00.388 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.388 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.388 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.388 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.388 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.388 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.388 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.388 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.388 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.388 { 00:18:00.388 "cntlid": 63, 00:18:00.388 "qid": 0, 00:18:00.388 "state": "enabled", 00:18:00.388 "thread": "nvmf_tgt_poll_group_000", 00:18:00.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:00.389 "listen_address": { 00:18:00.389 "trtype": "TCP", 00:18:00.389 "adrfam": "IPv4", 00:18:00.389 "traddr": "10.0.0.2", 00:18:00.389 "trsvcid": 
"4420" 00:18:00.389 }, 00:18:00.389 "peer_address": { 00:18:00.389 "trtype": "TCP", 00:18:00.389 "adrfam": "IPv4", 00:18:00.389 "traddr": "10.0.0.1", 00:18:00.389 "trsvcid": "34526" 00:18:00.389 }, 00:18:00.389 "auth": { 00:18:00.389 "state": "completed", 00:18:00.389 "digest": "sha384", 00:18:00.389 "dhgroup": "ffdhe2048" 00:18:00.389 } 00:18:00.389 } 00:18:00.389 ]' 00:18:00.389 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.648 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:00.648 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.648 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:00.648 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.648 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.648 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.648 05:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.907 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:18:00.907 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.474 05:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.733 00:18:01.992 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.992 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:18:01.992 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.992 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.992 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.992 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.992 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.992 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.992 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.992 { 00:18:01.992 "cntlid": 65, 00:18:01.992 "qid": 0, 00:18:01.992 "state": "enabled", 00:18:01.992 "thread": "nvmf_tgt_poll_group_000", 00:18:01.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:01.992 "listen_address": { 00:18:01.992 "trtype": "TCP", 00:18:01.992 "adrfam": "IPv4", 00:18:01.992 "traddr": "10.0.0.2", 00:18:01.992 "trsvcid": "4420" 00:18:01.992 }, 00:18:01.992 "peer_address": { 00:18:01.992 "trtype": "TCP", 00:18:01.992 "adrfam": "IPv4", 00:18:01.992 "traddr": "10.0.0.1", 00:18:01.992 "trsvcid": "40842" 00:18:01.992 }, 00:18:01.992 "auth": { 00:18:01.992 "state": "completed", 00:18:01.992 "digest": "sha384", 00:18:01.992 "dhgroup": "ffdhe3072" 00:18:01.992 } 00:18:01.992 } 00:18:01.992 ]' 00:18:01.992 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.992 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:01.992 05:12:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.251 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:02.251 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.251 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.251 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.251 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.510 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:18:02.510 05:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:18:03.085 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.085 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:03.086 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.086 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.086 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.086 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.086 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:03.086 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:03.086 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:03.086 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.086 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:03.086 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:03.086 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:03.086 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.086 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.086 05:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.086 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.086 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.086 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.086 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.086 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.344 00:18:03.344 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.344 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.344 05:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.602 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.602 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.602 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.602 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.602 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.602 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.602 { 00:18:03.602 "cntlid": 67, 00:18:03.602 "qid": 0, 00:18:03.602 "state": "enabled", 00:18:03.602 "thread": "nvmf_tgt_poll_group_000", 00:18:03.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:03.602 "listen_address": { 00:18:03.602 "trtype": "TCP", 00:18:03.602 "adrfam": "IPv4", 00:18:03.602 "traddr": "10.0.0.2", 00:18:03.602 "trsvcid": "4420" 00:18:03.602 }, 00:18:03.602 "peer_address": { 00:18:03.602 "trtype": "TCP", 00:18:03.602 "adrfam": "IPv4", 00:18:03.602 "traddr": "10.0.0.1", 00:18:03.602 "trsvcid": "40854" 00:18:03.602 }, 00:18:03.602 "auth": { 00:18:03.602 "state": "completed", 00:18:03.602 "digest": "sha384", 00:18:03.602 "dhgroup": "ffdhe3072" 00:18:03.602 } 00:18:03.602 } 00:18:03.602 ]' 00:18:03.602 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.602 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:03.602 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.860 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:03.860 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.860 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.860 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:03.860 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.119 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:18:04.119 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:18:04.687 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.687 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:04.687 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.687 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.687 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.687 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.687 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:04.687 05:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:04.687 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:04.687 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.687 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:04.687 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:04.687 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:04.687 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.687 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.687 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.687 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.687 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.687 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.687 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.687 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.945 00:18:04.945 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.945 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.945 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.204 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.204 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.204 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.204 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.204 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.204 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.204 { 00:18:05.204 "cntlid": 69, 00:18:05.204 "qid": 0, 00:18:05.204 "state": "enabled", 00:18:05.204 "thread": "nvmf_tgt_poll_group_000", 00:18:05.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:05.204 "listen_address": { 
00:18:05.204 "trtype": "TCP", 00:18:05.204 "adrfam": "IPv4", 00:18:05.204 "traddr": "10.0.0.2", 00:18:05.204 "trsvcid": "4420" 00:18:05.204 }, 00:18:05.204 "peer_address": { 00:18:05.204 "trtype": "TCP", 00:18:05.204 "adrfam": "IPv4", 00:18:05.204 "traddr": "10.0.0.1", 00:18:05.204 "trsvcid": "40876" 00:18:05.204 }, 00:18:05.204 "auth": { 00:18:05.204 "state": "completed", 00:18:05.204 "digest": "sha384", 00:18:05.204 "dhgroup": "ffdhe3072" 00:18:05.204 } 00:18:05.204 } 00:18:05.204 ]' 00:18:05.204 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.204 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.204 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.462 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:05.462 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.462 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.462 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.462 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.463 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:18:05.463 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:18:06.029 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.029 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:06.029 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.029 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.287 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.287 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.287 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:06.287 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:06.287 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:06.287 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.287 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:18:06.287 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:06.287 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:06.287 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.287 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:18:06.287 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.287 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.287 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.287 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:06.287 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.287 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.544 00:18:06.544 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.544 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:18:06.544 05:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.802 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.802 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.802 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.802 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.802 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.802 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.802 { 00:18:06.802 "cntlid": 71, 00:18:06.802 "qid": 0, 00:18:06.802 "state": "enabled", 00:18:06.802 "thread": "nvmf_tgt_poll_group_000", 00:18:06.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:06.802 "listen_address": { 00:18:06.802 "trtype": "TCP", 00:18:06.802 "adrfam": "IPv4", 00:18:06.802 "traddr": "10.0.0.2", 00:18:06.802 "trsvcid": "4420" 00:18:06.802 }, 00:18:06.802 "peer_address": { 00:18:06.802 "trtype": "TCP", 00:18:06.802 "adrfam": "IPv4", 00:18:06.802 "traddr": "10.0.0.1", 00:18:06.802 "trsvcid": "40898" 00:18:06.802 }, 00:18:06.802 "auth": { 00:18:06.802 "state": "completed", 00:18:06.802 "digest": "sha384", 00:18:06.802 "dhgroup": "ffdhe3072" 00:18:06.802 } 00:18:06.802 } 00:18:06.802 ]' 00:18:06.802 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.802 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:06.802 05:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.060 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:07.060 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.060 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.060 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.060 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.318 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:18:07.318 05:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.884 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.142 00:18:08.142 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.142 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.142 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.401 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.401 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.401 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.401 05:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.401 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.401 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.401 { 00:18:08.401 "cntlid": 73, 00:18:08.401 "qid": 0, 00:18:08.401 "state": "enabled", 00:18:08.401 "thread": "nvmf_tgt_poll_group_000", 00:18:08.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:08.401 "listen_address": { 00:18:08.401 "trtype": "TCP", 00:18:08.401 "adrfam": "IPv4", 00:18:08.401 "traddr": "10.0.0.2", 00:18:08.401 "trsvcid": "4420" 00:18:08.401 }, 00:18:08.401 "peer_address": { 00:18:08.401 "trtype": "TCP", 00:18:08.401 "adrfam": "IPv4", 00:18:08.401 "traddr": "10.0.0.1", 00:18:08.401 "trsvcid": "40912" 00:18:08.401 }, 00:18:08.401 "auth": { 00:18:08.401 "state": "completed", 00:18:08.401 "digest": "sha384", 00:18:08.401 "dhgroup": "ffdhe4096" 00:18:08.401 } 00:18:08.401 } 00:18:08.401 ]' 00:18:08.401 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.660 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:08.660 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.660 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:08.660 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.660 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.660 05:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.660 05:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.918 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:18:08.918 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:18:09.486 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.486 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:09.486 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.486 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.486 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.486 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.486 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:09.486 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:09.486 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:09.486 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.486 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:09.486 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:09.486 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:09.486 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.486 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.486 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.486 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.486 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.486 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.486 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.486 05:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.744 00:18:10.003 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.003 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.003 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.003 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.003 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.003 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.003 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.003 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.003 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.003 { 00:18:10.003 "cntlid": 75, 00:18:10.003 "qid": 0, 00:18:10.003 "state": "enabled", 00:18:10.003 "thread": "nvmf_tgt_poll_group_000", 00:18:10.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:10.003 
"listen_address": { 00:18:10.003 "trtype": "TCP", 00:18:10.003 "adrfam": "IPv4", 00:18:10.003 "traddr": "10.0.0.2", 00:18:10.003 "trsvcid": "4420" 00:18:10.003 }, 00:18:10.003 "peer_address": { 00:18:10.003 "trtype": "TCP", 00:18:10.003 "adrfam": "IPv4", 00:18:10.003 "traddr": "10.0.0.1", 00:18:10.003 "trsvcid": "40940" 00:18:10.003 }, 00:18:10.003 "auth": { 00:18:10.003 "state": "completed", 00:18:10.003 "digest": "sha384", 00:18:10.003 "dhgroup": "ffdhe4096" 00:18:10.003 } 00:18:10.003 } 00:18:10.003 ]' 00:18:10.003 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.262 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:10.262 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.262 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:10.262 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.262 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.262 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.262 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.520 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:18:10.520 05:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:18:11.089 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.089 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:11.089 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.089 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.089 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.089 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.089 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:11.089 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:11.089 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:11.089 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.089 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:18:11.089 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:11.089 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:11.089 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.089 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.089 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.089 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.089 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.089 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.089 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.089 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.348 00:18:11.608 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:11.608 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.608 05:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.608 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.608 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.608 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.608 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.608 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.608 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.608 { 00:18:11.608 "cntlid": 77, 00:18:11.608 "qid": 0, 00:18:11.608 "state": "enabled", 00:18:11.608 "thread": "nvmf_tgt_poll_group_000", 00:18:11.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:11.608 "listen_address": { 00:18:11.608 "trtype": "TCP", 00:18:11.608 "adrfam": "IPv4", 00:18:11.608 "traddr": "10.0.0.2", 00:18:11.608 "trsvcid": "4420" 00:18:11.608 }, 00:18:11.608 "peer_address": { 00:18:11.608 "trtype": "TCP", 00:18:11.608 "adrfam": "IPv4", 00:18:11.608 "traddr": "10.0.0.1", 00:18:11.608 "trsvcid": "43594" 00:18:11.608 }, 00:18:11.608 "auth": { 00:18:11.608 "state": "completed", 00:18:11.608 "digest": "sha384", 00:18:11.608 "dhgroup": "ffdhe4096" 00:18:11.608 } 00:18:11.608 } 00:18:11.608 ]' 00:18:11.608 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.867 05:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:11.867 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.867 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:11.867 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.867 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.867 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.867 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.124 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:18:12.124 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:18:12.691 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.691 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:12.691 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.691 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.691 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.691 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.691 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:12.691 05:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:12.691 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:12.691 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.691 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:12.691 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:12.691 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:12.691 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.691 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:18:12.691 05:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.691 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.691 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.691 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:12.691 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.691 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.949 00:18:12.949 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.949 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.949 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.208 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.208 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.208 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.208 05:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.208 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.208 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.208 { 00:18:13.208 "cntlid": 79, 00:18:13.208 "qid": 0, 00:18:13.208 "state": "enabled", 00:18:13.208 "thread": "nvmf_tgt_poll_group_000", 00:18:13.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:13.208 "listen_address": { 00:18:13.208 "trtype": "TCP", 00:18:13.208 "adrfam": "IPv4", 00:18:13.208 "traddr": "10.0.0.2", 00:18:13.208 "trsvcid": "4420" 00:18:13.208 }, 00:18:13.208 "peer_address": { 00:18:13.208 "trtype": "TCP", 00:18:13.208 "adrfam": "IPv4", 00:18:13.208 "traddr": "10.0.0.1", 00:18:13.208 "trsvcid": "43638" 00:18:13.208 }, 00:18:13.208 "auth": { 00:18:13.208 "state": "completed", 00:18:13.208 "digest": "sha384", 00:18:13.208 "dhgroup": "ffdhe4096" 00:18:13.208 } 00:18:13.208 } 00:18:13.208 ]' 00:18:13.208 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.208 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:13.208 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.466 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:13.466 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.466 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.466 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.466 05:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.724 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:18:13.724 05:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:18:14.291 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.291 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:14.291 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.291 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.291 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.291 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.291 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.291 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:18:14.291 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:14.291 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:14.291 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.291 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:14.291 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:14.291 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:14.291 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.291 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.292 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.292 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.292 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.292 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.292 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.292 05:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.859 00:18:14.859 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.859 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.859 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.859 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.859 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.859 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.859 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.859 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.859 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.859 { 00:18:14.859 "cntlid": 81, 00:18:14.859 "qid": 0, 00:18:14.859 "state": "enabled", 00:18:14.859 "thread": "nvmf_tgt_poll_group_000", 00:18:14.859 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:14.859 "listen_address": { 
00:18:14.859 "trtype": "TCP", 00:18:14.859 "adrfam": "IPv4", 00:18:14.859 "traddr": "10.0.0.2", 00:18:14.859 "trsvcid": "4420" 00:18:14.859 }, 00:18:14.859 "peer_address": { 00:18:14.859 "trtype": "TCP", 00:18:14.859 "adrfam": "IPv4", 00:18:14.859 "traddr": "10.0.0.1", 00:18:14.859 "trsvcid": "43658" 00:18:14.859 }, 00:18:14.859 "auth": { 00:18:14.859 "state": "completed", 00:18:14.859 "digest": "sha384", 00:18:14.859 "dhgroup": "ffdhe6144" 00:18:14.859 } 00:18:14.859 } 00:18:14.859 ]' 00:18:14.859 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.859 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.859 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.118 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:15.118 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.118 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.118 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.118 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.377 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:18:15.377 05:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:18:15.946 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.946 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:15.946 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.946 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.946 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.946 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.946 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:15.946 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:15.946 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:15.946 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:18:15.946 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:15.946 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:15.946 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:15.946 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.946 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.946 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.946 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.946 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.946 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.946 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.946 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.515 00:18:16.516 05:12:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.516 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.516 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.516 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.516 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.516 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.516 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.516 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.516 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.516 { 00:18:16.516 "cntlid": 83, 00:18:16.516 "qid": 0, 00:18:16.516 "state": "enabled", 00:18:16.516 "thread": "nvmf_tgt_poll_group_000", 00:18:16.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:16.516 "listen_address": { 00:18:16.516 "trtype": "TCP", 00:18:16.516 "adrfam": "IPv4", 00:18:16.516 "traddr": "10.0.0.2", 00:18:16.516 "trsvcid": "4420" 00:18:16.516 }, 00:18:16.516 "peer_address": { 00:18:16.516 "trtype": "TCP", 00:18:16.516 "adrfam": "IPv4", 00:18:16.516 "traddr": "10.0.0.1", 00:18:16.516 "trsvcid": "43686" 00:18:16.516 }, 00:18:16.516 "auth": { 00:18:16.516 "state": "completed", 00:18:16.516 "digest": "sha384", 00:18:16.516 "dhgroup": "ffdhe6144" 00:18:16.516 } 00:18:16.516 } 00:18:16.516 ]' 00:18:16.516 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:18:16.516 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.516 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.775 05:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:16.775 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.775 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.775 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.775 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.775 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:18:16.775 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:18:17.343 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.343 05:12:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:17.343 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.343 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.343 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.343 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.343 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:17.343 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:17.603 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:17.603 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.603 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:17.603 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:17.603 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:17.603 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.603 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.603 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.603 05:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.603 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.603 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.603 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.603 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.188 00:18:18.188 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.188 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.188 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.188 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.188 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.188 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.188 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.188 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.188 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.188 { 00:18:18.188 "cntlid": 85, 00:18:18.188 "qid": 0, 00:18:18.188 "state": "enabled", 00:18:18.188 "thread": "nvmf_tgt_poll_group_000", 00:18:18.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:18.188 "listen_address": { 00:18:18.188 "trtype": "TCP", 00:18:18.188 "adrfam": "IPv4", 00:18:18.189 "traddr": "10.0.0.2", 00:18:18.189 "trsvcid": "4420" 00:18:18.189 }, 00:18:18.189 "peer_address": { 00:18:18.189 "trtype": "TCP", 00:18:18.189 "adrfam": "IPv4", 00:18:18.189 "traddr": "10.0.0.1", 00:18:18.189 "trsvcid": "43708" 00:18:18.189 }, 00:18:18.189 "auth": { 00:18:18.189 "state": "completed", 00:18:18.189 "digest": "sha384", 00:18:18.189 "dhgroup": "ffdhe6144" 00:18:18.189 } 00:18:18.189 } 00:18:18.189 ]' 00:18:18.189 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.189 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.189 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.189 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:18.189 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.448 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:18.448 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.448 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.448 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:18:18.448 05:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:18:19.035 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.035 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:19.035 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.035 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.035 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.035 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:18:19.035 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:19.035 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:19.293 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:19.293 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.293 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:19.294 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:19.294 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:19.294 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.294 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:18:19.294 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.294 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.294 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.294 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:19.294 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.294 05:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.552 00:18:19.552 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.552 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.552 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.810 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.810 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.810 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.810 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.810 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.810 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.810 { 00:18:19.810 "cntlid": 87, 00:18:19.810 "qid": 0, 00:18:19.810 "state": "enabled", 00:18:19.810 "thread": "nvmf_tgt_poll_group_000", 00:18:19.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:19.810 "listen_address": { 00:18:19.810 "trtype": 
"TCP", 00:18:19.810 "adrfam": "IPv4", 00:18:19.810 "traddr": "10.0.0.2", 00:18:19.810 "trsvcid": "4420" 00:18:19.810 }, 00:18:19.810 "peer_address": { 00:18:19.810 "trtype": "TCP", 00:18:19.810 "adrfam": "IPv4", 00:18:19.811 "traddr": "10.0.0.1", 00:18:19.811 "trsvcid": "43726" 00:18:19.811 }, 00:18:19.811 "auth": { 00:18:19.811 "state": "completed", 00:18:19.811 "digest": "sha384", 00:18:19.811 "dhgroup": "ffdhe6144" 00:18:19.811 } 00:18:19.811 } 00:18:19.811 ]' 00:18:19.811 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.811 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.811 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.068 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:20.068 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.068 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.068 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.068 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.327 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:18:20.327 05:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:18:20.893 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.893 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:20.893 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.893 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.893 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.893 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.893 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.893 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:20.893 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:20.893 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:20.893 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.893 05:13:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:20.893 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:20.893 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:20.893 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.893 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.893 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.893 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.893 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.893 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.893 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.893 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.459 00:18:21.459 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.459 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.459 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.717 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.717 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.717 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.717 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.717 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.717 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.717 { 00:18:21.717 "cntlid": 89, 00:18:21.717 "qid": 0, 00:18:21.717 "state": "enabled", 00:18:21.717 "thread": "nvmf_tgt_poll_group_000", 00:18:21.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:21.717 "listen_address": { 00:18:21.717 "trtype": "TCP", 00:18:21.717 "adrfam": "IPv4", 00:18:21.717 "traddr": "10.0.0.2", 00:18:21.717 "trsvcid": "4420" 00:18:21.717 }, 00:18:21.717 "peer_address": { 00:18:21.717 "trtype": "TCP", 00:18:21.717 "adrfam": "IPv4", 00:18:21.717 "traddr": "10.0.0.1", 00:18:21.717 "trsvcid": "52426" 00:18:21.717 }, 00:18:21.717 "auth": { 00:18:21.717 "state": "completed", 00:18:21.717 "digest": "sha384", 00:18:21.717 "dhgroup": "ffdhe8192" 00:18:21.717 } 00:18:21.717 } 00:18:21.717 ]' 00:18:21.717 05:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.717 05:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.717 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.717 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:21.717 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.717 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.717 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.717 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.975 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:18:21.975 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:18:22.541 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:18:22.541 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:22.541 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.541 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.541 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.541 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.541 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:22.541 05:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:22.800 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:22.800 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.800 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:22.800 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:22.800 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:22.800 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.800 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.800 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.800 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.800 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.800 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.800 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.800 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.059 00:18:23.317 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.317 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.317 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.575 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.575 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.575 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.575 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.575 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.575 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.575 { 00:18:23.575 "cntlid": 91, 00:18:23.575 "qid": 0, 00:18:23.575 "state": "enabled", 00:18:23.575 "thread": "nvmf_tgt_poll_group_000", 00:18:23.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:23.575 "listen_address": { 00:18:23.575 "trtype": "TCP", 00:18:23.575 "adrfam": "IPv4", 00:18:23.575 "traddr": "10.0.0.2", 00:18:23.575 "trsvcid": "4420" 00:18:23.575 }, 00:18:23.575 "peer_address": { 00:18:23.575 "trtype": "TCP", 00:18:23.575 "adrfam": "IPv4", 00:18:23.575 "traddr": "10.0.0.1", 00:18:23.575 "trsvcid": "52472" 00:18:23.575 }, 00:18:23.575 "auth": { 00:18:23.575 "state": "completed", 00:18:23.575 "digest": "sha384", 00:18:23.575 "dhgroup": "ffdhe8192" 00:18:23.575 } 00:18:23.575 } 00:18:23.575 ]' 00:18:23.575 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.575 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.575 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.575 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:23.575 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.575 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:23.575 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.575 05:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.834 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:18:23.834 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:18:24.402 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.402 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:24.402 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.402 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.402 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.402 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:18:24.402 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:24.402 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:24.661 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:24.661 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.661 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:24.661 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:24.661 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:24.661 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.661 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.661 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.661 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.661 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.661 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.661 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.661 05:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.920 00:18:24.920 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.920 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.920 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.178 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.178 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.178 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.178 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.178 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.178 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.178 { 00:18:25.178 "cntlid": 93, 00:18:25.178 "qid": 0, 00:18:25.178 "state": "enabled", 00:18:25.178 "thread": "nvmf_tgt_poll_group_000", 00:18:25.178 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:25.178 "listen_address": { 00:18:25.178 "trtype": "TCP", 00:18:25.178 "adrfam": "IPv4", 00:18:25.178 "traddr": "10.0.0.2", 00:18:25.178 "trsvcid": "4420" 00:18:25.178 }, 00:18:25.178 "peer_address": { 00:18:25.178 "trtype": "TCP", 00:18:25.178 "adrfam": "IPv4", 00:18:25.178 "traddr": "10.0.0.1", 00:18:25.178 "trsvcid": "52498" 00:18:25.178 }, 00:18:25.178 "auth": { 00:18:25.178 "state": "completed", 00:18:25.178 "digest": "sha384", 00:18:25.178 "dhgroup": "ffdhe8192" 00:18:25.178 } 00:18:25.178 } 00:18:25.178 ]' 00:18:25.178 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.437 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.437 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.437 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.437 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.437 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.437 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.437 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.696 05:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:18:25.696 05:13:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:18:26.264 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.264 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:26.264 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.264 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.264 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.264 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.264 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:26.264 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:26.265 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:26.265 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:18:26.265 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:26.265 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:26.265 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:26.265 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.265 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:18:26.265 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.265 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.265 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.265 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:26.265 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.265 05:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.833 00:18:26.833 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:26.833 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.833 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.092 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.092 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.092 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.092 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.092 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.092 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.092 { 00:18:27.092 "cntlid": 95, 00:18:27.092 "qid": 0, 00:18:27.092 "state": "enabled", 00:18:27.092 "thread": "nvmf_tgt_poll_group_000", 00:18:27.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:27.092 "listen_address": { 00:18:27.092 "trtype": "TCP", 00:18:27.092 "adrfam": "IPv4", 00:18:27.092 "traddr": "10.0.0.2", 00:18:27.092 "trsvcid": "4420" 00:18:27.092 }, 00:18:27.092 "peer_address": { 00:18:27.092 "trtype": "TCP", 00:18:27.092 "adrfam": "IPv4", 00:18:27.092 "traddr": "10.0.0.1", 00:18:27.092 "trsvcid": "52528" 00:18:27.092 }, 00:18:27.092 "auth": { 00:18:27.092 "state": "completed", 00:18:27.092 "digest": "sha384", 00:18:27.092 "dhgroup": "ffdhe8192" 00:18:27.092 } 00:18:27.092 } 00:18:27.092 ]' 00:18:27.092 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.092 05:13:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:27.092 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.092 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:27.092 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.092 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.092 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.092 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.352 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:18:27.352 05:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:18:27.920 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.921 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:27.921 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.921 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.921 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.921 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:27.921 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.921 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.921 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:27.921 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:28.180 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:28.180 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.180 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:28.180 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:28.180 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:28.180 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.180 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.180 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.180 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.180 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.180 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.180 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.180 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.439 00:18:28.439 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.439 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.439 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.698 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.698 05:13:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.698 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.698 05:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.698 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.698 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.698 { 00:18:28.698 "cntlid": 97, 00:18:28.698 "qid": 0, 00:18:28.698 "state": "enabled", 00:18:28.698 "thread": "nvmf_tgt_poll_group_000", 00:18:28.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:28.698 "listen_address": { 00:18:28.698 "trtype": "TCP", 00:18:28.698 "adrfam": "IPv4", 00:18:28.698 "traddr": "10.0.0.2", 00:18:28.698 "trsvcid": "4420" 00:18:28.698 }, 00:18:28.698 "peer_address": { 00:18:28.698 "trtype": "TCP", 00:18:28.698 "adrfam": "IPv4", 00:18:28.698 "traddr": "10.0.0.1", 00:18:28.698 "trsvcid": "52548" 00:18:28.698 }, 00:18:28.698 "auth": { 00:18:28.698 "state": "completed", 00:18:28.698 "digest": "sha512", 00:18:28.698 "dhgroup": "null" 00:18:28.698 } 00:18:28.698 } 00:18:28.698 ]' 00:18:28.698 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.698 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.698 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.698 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:28.698 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.698 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.698 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.698 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.957 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:18:28.957 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:18:29.526 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.526 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:29.526 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.526 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.526 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.526 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.526 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:29.526 05:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:29.784 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:29.785 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.785 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:29.785 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:29.785 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:29.785 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.785 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.785 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.785 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.785 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.785 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.785 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.785 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.043 00:18:30.043 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.043 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.043 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.301 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.301 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.301 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.301 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.301 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.301 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.301 { 00:18:30.301 "cntlid": 99, 
00:18:30.301 "qid": 0, 00:18:30.301 "state": "enabled", 00:18:30.301 "thread": "nvmf_tgt_poll_group_000", 00:18:30.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:30.301 "listen_address": { 00:18:30.301 "trtype": "TCP", 00:18:30.301 "adrfam": "IPv4", 00:18:30.301 "traddr": "10.0.0.2", 00:18:30.301 "trsvcid": "4420" 00:18:30.301 }, 00:18:30.301 "peer_address": { 00:18:30.301 "trtype": "TCP", 00:18:30.301 "adrfam": "IPv4", 00:18:30.301 "traddr": "10.0.0.1", 00:18:30.301 "trsvcid": "52578" 00:18:30.301 }, 00:18:30.301 "auth": { 00:18:30.301 "state": "completed", 00:18:30.301 "digest": "sha512", 00:18:30.301 "dhgroup": "null" 00:18:30.301 } 00:18:30.301 } 00:18:30.301 ]' 00:18:30.301 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.301 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.301 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.302 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:30.302 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.302 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.302 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.302 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.560 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret 
DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:18:30.560 05:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:18:31.127 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.127 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:31.127 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.127 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.127 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.127 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.127 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:31.127 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:31.385 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:18:31.385 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.385 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:31.385 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:31.385 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:31.385 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.385 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.385 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.385 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.385 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.385 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.385 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.385 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.643 00:18:31.643 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.643 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.643 05:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.643 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.643 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.643 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.643 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.901 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.901 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.901 { 00:18:31.901 "cntlid": 101, 00:18:31.901 "qid": 0, 00:18:31.901 "state": "enabled", 00:18:31.901 "thread": "nvmf_tgt_poll_group_000", 00:18:31.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:31.901 "listen_address": { 00:18:31.901 "trtype": "TCP", 00:18:31.901 "adrfam": "IPv4", 00:18:31.901 "traddr": "10.0.0.2", 00:18:31.901 "trsvcid": "4420" 00:18:31.901 }, 00:18:31.901 "peer_address": { 00:18:31.901 "trtype": "TCP", 00:18:31.901 "adrfam": "IPv4", 00:18:31.901 "traddr": "10.0.0.1", 00:18:31.901 "trsvcid": "54272" 00:18:31.901 }, 00:18:31.901 "auth": { 00:18:31.901 "state": "completed", 00:18:31.901 "digest": "sha512", 00:18:31.901 "dhgroup": "null" 00:18:31.901 } 00:18:31.901 } 
00:18:31.901 ]' 00:18:31.901 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.901 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.901 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.901 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:31.901 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.901 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.901 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.901 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.160 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:18:32.160 05:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:18:32.726 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.726 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.726 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:32.726 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.726 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.726 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.726 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.726 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:32.726 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:32.983 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:32.983 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.983 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:32.983 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:32.983 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:32.983 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.983 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:18:32.983 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.983 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.983 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.984 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:32.984 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:32.984 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.241 00:18:33.241 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.241 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.241 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.241 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.241 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:33.241 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.241 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.241 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.241 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.241 { 00:18:33.241 "cntlid": 103, 00:18:33.241 "qid": 0, 00:18:33.241 "state": "enabled", 00:18:33.241 "thread": "nvmf_tgt_poll_group_000", 00:18:33.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:33.241 "listen_address": { 00:18:33.241 "trtype": "TCP", 00:18:33.241 "adrfam": "IPv4", 00:18:33.241 "traddr": "10.0.0.2", 00:18:33.241 "trsvcid": "4420" 00:18:33.241 }, 00:18:33.241 "peer_address": { 00:18:33.241 "trtype": "TCP", 00:18:33.241 "adrfam": "IPv4", 00:18:33.241 "traddr": "10.0.0.1", 00:18:33.241 "trsvcid": "54286" 00:18:33.241 }, 00:18:33.241 "auth": { 00:18:33.241 "state": "completed", 00:18:33.241 "digest": "sha512", 00:18:33.241 "dhgroup": "null" 00:18:33.241 } 00:18:33.241 } 00:18:33.241 ]' 00:18:33.241 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.498 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.498 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.498 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:33.498 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.498 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.498 05:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.498 05:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.756 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:18:33.756 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:18:34.324 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.324 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:34.324 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.324 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.324 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.324 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.324 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.324 05:13:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:34.324 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:34.324 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:34.324 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.324 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:34.324 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:34.324 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:34.324 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.324 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.324 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.324 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.582 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.582 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.582 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.582 05:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.582 00:18:34.582 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.582 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.582 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.843 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.843 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.843 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.843 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.843 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.843 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.843 { 00:18:34.843 "cntlid": 105, 00:18:34.843 "qid": 0, 00:18:34.843 "state": "enabled", 00:18:34.843 "thread": "nvmf_tgt_poll_group_000", 00:18:34.843 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:34.843 "listen_address": { 00:18:34.843 "trtype": "TCP", 00:18:34.843 "adrfam": "IPv4", 00:18:34.843 "traddr": "10.0.0.2", 00:18:34.843 "trsvcid": "4420" 00:18:34.843 }, 00:18:34.843 "peer_address": { 00:18:34.843 "trtype": "TCP", 00:18:34.843 "adrfam": "IPv4", 00:18:34.843 "traddr": "10.0.0.1", 00:18:34.843 "trsvcid": "54314" 00:18:34.843 }, 00:18:34.843 "auth": { 00:18:34.843 "state": "completed", 00:18:34.843 "digest": "sha512", 00:18:34.843 "dhgroup": "ffdhe2048" 00:18:34.843 } 00:18:34.843 } 00:18:34.843 ]' 00:18:34.843 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.843 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.843 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.102 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:35.102 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.102 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.102 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.102 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.102 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret 
DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:18:35.102 05:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:18:35.669 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.928 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:35.929 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.929 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.929 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.929 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.929 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:35.929 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:35.929 05:13:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:35.929 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.929 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:35.929 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:35.929 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:35.929 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.929 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.929 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.929 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.929 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.929 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.929 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.929 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.188 00:18:36.188 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.188 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.188 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.446 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.446 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.446 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.446 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.446 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.446 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.446 { 00:18:36.446 "cntlid": 107, 00:18:36.446 "qid": 0, 00:18:36.446 "state": "enabled", 00:18:36.446 "thread": "nvmf_tgt_poll_group_000", 00:18:36.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:36.446 "listen_address": { 00:18:36.446 "trtype": "TCP", 00:18:36.446 "adrfam": "IPv4", 00:18:36.446 "traddr": "10.0.0.2", 00:18:36.446 "trsvcid": "4420" 00:18:36.446 }, 00:18:36.446 "peer_address": { 00:18:36.446 "trtype": "TCP", 00:18:36.446 "adrfam": "IPv4", 00:18:36.446 "traddr": "10.0.0.1", 00:18:36.446 "trsvcid": "54348" 00:18:36.446 }, 00:18:36.446 "auth": { 00:18:36.446 "state": 
"completed", 00:18:36.446 "digest": "sha512", 00:18:36.446 "dhgroup": "ffdhe2048" 00:18:36.446 } 00:18:36.446 } 00:18:36.446 ]' 00:18:36.446 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.446 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.446 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.446 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:36.446 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.704 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.704 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.704 05:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.704 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:18:36.704 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:18:37.270 05:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.270 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:37.270 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.270 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.270 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.270 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.270 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:37.271 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:37.530 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:37.530 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.530 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:37.530 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:37.530 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:37.530 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.530 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.530 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.530 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.530 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.530 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.530 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.530 05:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.789 00:18:37.789 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:37.789 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:37.789 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.047 
05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.047 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.047 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.047 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.047 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.047 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.047 { 00:18:38.047 "cntlid": 109, 00:18:38.047 "qid": 0, 00:18:38.047 "state": "enabled", 00:18:38.047 "thread": "nvmf_tgt_poll_group_000", 00:18:38.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:38.047 "listen_address": { 00:18:38.047 "trtype": "TCP", 00:18:38.047 "adrfam": "IPv4", 00:18:38.047 "traddr": "10.0.0.2", 00:18:38.047 "trsvcid": "4420" 00:18:38.047 }, 00:18:38.047 "peer_address": { 00:18:38.047 "trtype": "TCP", 00:18:38.047 "adrfam": "IPv4", 00:18:38.047 "traddr": "10.0.0.1", 00:18:38.047 "trsvcid": "54368" 00:18:38.047 }, 00:18:38.047 "auth": { 00:18:38.047 "state": "completed", 00:18:38.047 "digest": "sha512", 00:18:38.047 "dhgroup": "ffdhe2048" 00:18:38.047 } 00:18:38.047 } 00:18:38.047 ]' 00:18:38.047 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.047 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.047 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.047 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:38.047 05:13:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.047 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.047 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.048 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.307 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:18:38.307 05:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:18:38.897 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.897 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:38.897 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.897 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.897 
05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.897 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.897 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:38.897 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:39.155 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:39.155 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.155 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:39.155 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:39.155 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:39.155 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.155 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:18:39.155 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.155 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.155 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.155 05:13:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:39.155 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:39.155 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:39.414 00:18:39.414 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.414 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.414 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.672 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.672 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.672 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.672 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.672 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.672 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.672 { 00:18:39.672 "cntlid": 111, 
00:18:39.672 "qid": 0, 00:18:39.672 "state": "enabled", 00:18:39.672 "thread": "nvmf_tgt_poll_group_000", 00:18:39.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:39.672 "listen_address": { 00:18:39.672 "trtype": "TCP", 00:18:39.672 "adrfam": "IPv4", 00:18:39.672 "traddr": "10.0.0.2", 00:18:39.672 "trsvcid": "4420" 00:18:39.672 }, 00:18:39.672 "peer_address": { 00:18:39.672 "trtype": "TCP", 00:18:39.672 "adrfam": "IPv4", 00:18:39.672 "traddr": "10.0.0.1", 00:18:39.672 "trsvcid": "54390" 00:18:39.672 }, 00:18:39.672 "auth": { 00:18:39.672 "state": "completed", 00:18:39.672 "digest": "sha512", 00:18:39.672 "dhgroup": "ffdhe2048" 00:18:39.672 } 00:18:39.672 } 00:18:39.672 ]' 00:18:39.672 05:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.672 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:39.672 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.673 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:39.673 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.673 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.673 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.673 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.931 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:18:39.931 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:18:40.511 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.511 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:40.511 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.511 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.511 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.511 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:40.511 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.511 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:40.511 05:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:40.770 05:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:40.770 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.770 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:40.770 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:40.770 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:40.770 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.770 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.770 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.770 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.770 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.770 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.770 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.770 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.028 00:18:41.028 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.028 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.028 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.286 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.286 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.286 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.286 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.286 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.286 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.286 { 00:18:41.286 "cntlid": 113, 00:18:41.286 "qid": 0, 00:18:41.286 "state": "enabled", 00:18:41.286 "thread": "nvmf_tgt_poll_group_000", 00:18:41.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:41.286 "listen_address": { 00:18:41.286 "trtype": "TCP", 00:18:41.286 "adrfam": "IPv4", 00:18:41.286 "traddr": "10.0.0.2", 00:18:41.286 "trsvcid": "4420" 00:18:41.286 }, 00:18:41.286 "peer_address": { 00:18:41.286 "trtype": "TCP", 00:18:41.286 "adrfam": "IPv4", 00:18:41.286 "traddr": "10.0.0.1", 00:18:41.286 "trsvcid": "42094" 00:18:41.286 }, 00:18:41.286 "auth": { 00:18:41.286 "state": 
"completed", 00:18:41.286 "digest": "sha512", 00:18:41.286 "dhgroup": "ffdhe3072" 00:18:41.286 } 00:18:41.286 } 00:18:41.286 ]' 00:18:41.286 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.286 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.286 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.286 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:41.286 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.286 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.286 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.286 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.544 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:18:41.544 05:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret 
DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:18:42.110 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.110 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:42.110 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.110 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.110 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.110 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.110 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:42.110 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:42.369 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:42.369 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.369 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:42.369 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:42.369 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:18:42.369 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.369 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.369 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.369 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.369 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.369 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.369 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.369 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.628 00:18:42.628 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.628 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.628 05:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.886 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.886 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.886 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.886 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.886 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.886 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.886 { 00:18:42.886 "cntlid": 115, 00:18:42.886 "qid": 0, 00:18:42.886 "state": "enabled", 00:18:42.886 "thread": "nvmf_tgt_poll_group_000", 00:18:42.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:42.886 "listen_address": { 00:18:42.886 "trtype": "TCP", 00:18:42.886 "adrfam": "IPv4", 00:18:42.886 "traddr": "10.0.0.2", 00:18:42.886 "trsvcid": "4420" 00:18:42.886 }, 00:18:42.886 "peer_address": { 00:18:42.886 "trtype": "TCP", 00:18:42.886 "adrfam": "IPv4", 00:18:42.886 "traddr": "10.0.0.1", 00:18:42.886 "trsvcid": "42124" 00:18:42.886 }, 00:18:42.886 "auth": { 00:18:42.886 "state": "completed", 00:18:42.886 "digest": "sha512", 00:18:42.886 "dhgroup": "ffdhe3072" 00:18:42.886 } 00:18:42.886 } 00:18:42.886 ]' 00:18:42.886 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.886 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.887 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.887 05:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:42.887 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.887 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.887 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.887 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.145 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:18:43.145 05:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:18:43.713 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.713 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:43.713 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:43.713 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.713 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.713 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.713 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:43.713 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:43.972 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:18:43.972 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.972 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:43.972 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:43.972 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:43.972 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.972 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.972 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.972 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:43.972 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.972 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.972 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.972 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.230 00:18:44.230 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.230 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.230 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.489 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.489 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.489 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.489 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.489 05:13:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.489 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.489 { 00:18:44.489 "cntlid": 117, 00:18:44.489 "qid": 0, 00:18:44.489 "state": "enabled", 00:18:44.489 "thread": "nvmf_tgt_poll_group_000", 00:18:44.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:44.489 "listen_address": { 00:18:44.489 "trtype": "TCP", 00:18:44.489 "adrfam": "IPv4", 00:18:44.489 "traddr": "10.0.0.2", 00:18:44.489 "trsvcid": "4420" 00:18:44.489 }, 00:18:44.489 "peer_address": { 00:18:44.489 "trtype": "TCP", 00:18:44.489 "adrfam": "IPv4", 00:18:44.489 "traddr": "10.0.0.1", 00:18:44.489 "trsvcid": "42140" 00:18:44.489 }, 00:18:44.489 "auth": { 00:18:44.489 "state": "completed", 00:18:44.489 "digest": "sha512", 00:18:44.489 "dhgroup": "ffdhe3072" 00:18:44.489 } 00:18:44.489 } 00:18:44.489 ]' 00:18:44.489 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.489 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.489 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.489 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:44.489 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.489 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.489 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.489 05:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.748 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:18:44.748 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:18:45.316 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.316 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:45.316 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.316 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.316 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.316 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.316 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:45.316 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:45.316 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:45.316 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.316 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:45.316 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:45.317 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:45.317 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.317 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:18:45.317 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.317 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.576 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.576 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:45.576 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:45.576 05:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:45.576 00:18:45.835 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.835 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.835 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.835 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.835 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.835 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.835 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.835 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.835 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.835 { 00:18:45.835 "cntlid": 119, 00:18:45.835 "qid": 0, 00:18:45.835 "state": "enabled", 00:18:45.835 "thread": "nvmf_tgt_poll_group_000", 00:18:45.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:45.835 "listen_address": { 00:18:45.835 "trtype": "TCP", 00:18:45.835 "adrfam": "IPv4", 00:18:45.835 "traddr": "10.0.0.2", 00:18:45.835 "trsvcid": "4420" 00:18:45.835 }, 00:18:45.835 "peer_address": { 00:18:45.835 "trtype": "TCP", 00:18:45.835 "adrfam": "IPv4", 00:18:45.835 "traddr": "10.0.0.1", 
00:18:45.835 "trsvcid": "42170" 00:18:45.835 }, 00:18:45.835 "auth": { 00:18:45.835 "state": "completed", 00:18:45.835 "digest": "sha512", 00:18:45.835 "dhgroup": "ffdhe3072" 00:18:45.835 } 00:18:45.835 } 00:18:45.835 ]' 00:18:45.835 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.835 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:46.095 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.095 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:46.095 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.095 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.095 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.095 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.353 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:18:46.354 05:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:18:46.922 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.922 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:46.922 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.922 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.923 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.923 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.923 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.923 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:46.923 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:46.923 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:46.923 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:46.923 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:46.923 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:46.923 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:46.923 05:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.923 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.923 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.923 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.923 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.923 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.923 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.923 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.182 00:18:47.182 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.182 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.183 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.442 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.442 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.442 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.442 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.442 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.442 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.442 { 00:18:47.442 "cntlid": 121, 00:18:47.442 "qid": 0, 00:18:47.442 "state": "enabled", 00:18:47.442 "thread": "nvmf_tgt_poll_group_000", 00:18:47.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:47.442 "listen_address": { 00:18:47.442 "trtype": "TCP", 00:18:47.442 "adrfam": "IPv4", 00:18:47.442 "traddr": "10.0.0.2", 00:18:47.442 "trsvcid": "4420" 00:18:47.442 }, 00:18:47.442 "peer_address": { 00:18:47.442 "trtype": "TCP", 00:18:47.442 "adrfam": "IPv4", 00:18:47.442 "traddr": "10.0.0.1", 00:18:47.442 "trsvcid": "42194" 00:18:47.442 }, 00:18:47.442 "auth": { 00:18:47.442 "state": "completed", 00:18:47.442 "digest": "sha512", 00:18:47.442 "dhgroup": "ffdhe4096" 00:18:47.442 } 00:18:47.442 } 00:18:47.442 ]' 00:18:47.442 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.442 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.442 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.700 05:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:47.700 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.700 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.700 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.700 05:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.959 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:18:47.960 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:18:48.527 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.527 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:48.527 05:13:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.527 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.527 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.527 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.527 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:48.527 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:48.527 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:48.527 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.527 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:48.527 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:48.527 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:48.527 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.527 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.527 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.527 05:13:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.527 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.527 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.527 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.527 05:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.785 00:18:48.785 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.785 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.785 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.043 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.043 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.043 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.043 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:49.043 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.043 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.043 { 00:18:49.043 "cntlid": 123, 00:18:49.043 "qid": 0, 00:18:49.043 "state": "enabled", 00:18:49.043 "thread": "nvmf_tgt_poll_group_000", 00:18:49.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:49.043 "listen_address": { 00:18:49.043 "trtype": "TCP", 00:18:49.043 "adrfam": "IPv4", 00:18:49.043 "traddr": "10.0.0.2", 00:18:49.043 "trsvcid": "4420" 00:18:49.043 }, 00:18:49.043 "peer_address": { 00:18:49.043 "trtype": "TCP", 00:18:49.043 "adrfam": "IPv4", 00:18:49.043 "traddr": "10.0.0.1", 00:18:49.043 "trsvcid": "42232" 00:18:49.043 }, 00:18:49.043 "auth": { 00:18:49.043 "state": "completed", 00:18:49.043 "digest": "sha512", 00:18:49.043 "dhgroup": "ffdhe4096" 00:18:49.043 } 00:18:49.043 } 00:18:49.043 ]' 00:18:49.043 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.043 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.043 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.301 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:49.301 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.301 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.301 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.301 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.560 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:18:49.560 05:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:18:50.128 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.128 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:50.128 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.128 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.128 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.128 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.128 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:50.128 05:13:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:50.128 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:50.128 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.128 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:50.128 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:50.128 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:50.128 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.128 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.128 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.128 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.128 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.128 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.128 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.129 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.388 00:18:50.388 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.388 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.388 05:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.647 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.647 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.647 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.647 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.647 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.647 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.647 { 00:18:50.647 "cntlid": 125, 00:18:50.647 "qid": 0, 00:18:50.647 "state": "enabled", 00:18:50.647 "thread": "nvmf_tgt_poll_group_000", 00:18:50.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:50.647 "listen_address": { 00:18:50.647 "trtype": "TCP", 00:18:50.647 "adrfam": "IPv4", 00:18:50.647 "traddr": "10.0.0.2", 00:18:50.647 
"trsvcid": "4420" 00:18:50.647 }, 00:18:50.647 "peer_address": { 00:18:50.647 "trtype": "TCP", 00:18:50.647 "adrfam": "IPv4", 00:18:50.647 "traddr": "10.0.0.1", 00:18:50.647 "trsvcid": "42250" 00:18:50.647 }, 00:18:50.647 "auth": { 00:18:50.647 "state": "completed", 00:18:50.647 "digest": "sha512", 00:18:50.647 "dhgroup": "ffdhe4096" 00:18:50.647 } 00:18:50.647 } 00:18:50.647 ]' 00:18:50.647 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.647 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.647 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.647 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:50.647 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.920 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.920 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.920 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.921 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:18:50.921 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 
809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:18:51.488 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.488 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:51.488 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.488 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.488 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.488 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.488 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:51.488 05:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:51.747 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:51.747 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.747 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:51.747 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:51.747 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:51.747 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.747 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:18:51.747 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.747 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.747 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.747 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:51.747 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:51.747 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:52.005 00:18:52.005 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.005 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:52.005 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.263 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.263 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.263 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.263 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.263 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.263 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.263 { 00:18:52.263 "cntlid": 127, 00:18:52.263 "qid": 0, 00:18:52.263 "state": "enabled", 00:18:52.263 "thread": "nvmf_tgt_poll_group_000", 00:18:52.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:52.263 "listen_address": { 00:18:52.263 "trtype": "TCP", 00:18:52.263 "adrfam": "IPv4", 00:18:52.263 "traddr": "10.0.0.2", 00:18:52.263 "trsvcid": "4420" 00:18:52.263 }, 00:18:52.263 "peer_address": { 00:18:52.263 "trtype": "TCP", 00:18:52.263 "adrfam": "IPv4", 00:18:52.263 "traddr": "10.0.0.1", 00:18:52.263 "trsvcid": "42560" 00:18:52.263 }, 00:18:52.263 "auth": { 00:18:52.263 "state": "completed", 00:18:52.263 "digest": "sha512", 00:18:52.263 "dhgroup": "ffdhe4096" 00:18:52.263 } 00:18:52.263 } 00:18:52.263 ]' 00:18:52.263 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.263 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.263 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.263 
05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:52.263 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.521 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.521 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.521 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.522 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:18:52.522 05:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:18:53.090 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.090 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:53.090 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.090 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:53.090 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.090 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.090 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.090 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:53.090 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:53.350 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:53.350 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.350 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:53.350 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:53.350 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:53.350 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.350 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.350 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.350 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:53.350 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.350 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.350 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.350 05:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.609 00:18:53.609 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.609 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.609 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.868 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.868 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.868 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.868 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.868 05:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.868 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.868 { 00:18:53.868 "cntlid": 129, 00:18:53.868 "qid": 0, 00:18:53.868 "state": "enabled", 00:18:53.868 "thread": "nvmf_tgt_poll_group_000", 00:18:53.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:53.868 "listen_address": { 00:18:53.868 "trtype": "TCP", 00:18:53.868 "adrfam": "IPv4", 00:18:53.868 "traddr": "10.0.0.2", 00:18:53.868 "trsvcid": "4420" 00:18:53.868 }, 00:18:53.868 "peer_address": { 00:18:53.868 "trtype": "TCP", 00:18:53.868 "adrfam": "IPv4", 00:18:53.868 "traddr": "10.0.0.1", 00:18:53.868 "trsvcid": "42580" 00:18:53.868 }, 00:18:53.868 "auth": { 00:18:53.868 "state": "completed", 00:18:53.868 "digest": "sha512", 00:18:53.868 "dhgroup": "ffdhe6144" 00:18:53.868 } 00:18:53.868 } 00:18:53.868 ]' 00:18:53.868 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.868 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:53.868 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.126 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:54.126 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.126 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.126 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.126 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.126 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:18:54.126 05:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:18:54.692 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.692 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:54.692 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.692 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.692 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.692 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.692 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:54.692 05:13:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:54.950 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:54.950 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.950 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:54.950 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:54.950 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:54.950 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.950 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.950 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.950 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.950 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.950 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.950 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.950 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.515 00:18:55.515 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.515 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.515 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.515 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.515 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.515 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.515 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.515 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.515 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.515 { 00:18:55.515 "cntlid": 131, 00:18:55.515 "qid": 0, 00:18:55.515 "state": "enabled", 00:18:55.515 "thread": "nvmf_tgt_poll_group_000", 00:18:55.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:55.515 "listen_address": { 00:18:55.515 "trtype": "TCP", 00:18:55.515 "adrfam": "IPv4", 00:18:55.515 "traddr": "10.0.0.2", 00:18:55.515 
"trsvcid": "4420" 00:18:55.515 }, 00:18:55.515 "peer_address": { 00:18:55.515 "trtype": "TCP", 00:18:55.515 "adrfam": "IPv4", 00:18:55.515 "traddr": "10.0.0.1", 00:18:55.515 "trsvcid": "42608" 00:18:55.515 }, 00:18:55.515 "auth": { 00:18:55.515 "state": "completed", 00:18:55.515 "digest": "sha512", 00:18:55.515 "dhgroup": "ffdhe6144" 00:18:55.515 } 00:18:55.515 } 00:18:55.515 ]' 00:18:55.515 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.515 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.516 05:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.774 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:55.774 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.774 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.774 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.774 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.032 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:18:56.032 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 
809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:18:56.598 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.598 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:56.598 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.598 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.598 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.598 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.598 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:56.598 05:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:56.598 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:56.598 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.598 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:56.598 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:56.599 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:56.599 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.599 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.599 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.599 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.599 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.599 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.599 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.599 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.164 00:18:57.164 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.164 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:18:57.164 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.164 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.164 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.164 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.164 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.164 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.164 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.164 { 00:18:57.164 "cntlid": 133, 00:18:57.164 "qid": 0, 00:18:57.164 "state": "enabled", 00:18:57.164 "thread": "nvmf_tgt_poll_group_000", 00:18:57.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:57.164 "listen_address": { 00:18:57.164 "trtype": "TCP", 00:18:57.164 "adrfam": "IPv4", 00:18:57.164 "traddr": "10.0.0.2", 00:18:57.164 "trsvcid": "4420" 00:18:57.164 }, 00:18:57.164 "peer_address": { 00:18:57.164 "trtype": "TCP", 00:18:57.164 "adrfam": "IPv4", 00:18:57.164 "traddr": "10.0.0.1", 00:18:57.164 "trsvcid": "42654" 00:18:57.164 }, 00:18:57.164 "auth": { 00:18:57.164 "state": "completed", 00:18:57.164 "digest": "sha512", 00:18:57.164 "dhgroup": "ffdhe6144" 00:18:57.164 } 00:18:57.164 } 00:18:57.164 ]' 00:18:57.164 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.422 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.422 05:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.422 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:57.422 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.422 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.422 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.423 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.681 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:18:57.681 05:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:18:58.248 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.248 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:58.248 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.248 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.248 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.248 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.248 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:58.248 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:58.248 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:58.248 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.248 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:58.248 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:58.248 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:58.248 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.248 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:18:58.248 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.248 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.248 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.248 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:58.248 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:58.248 05:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:58.817 00:18:58.817 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.817 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.817 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.817 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.817 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.817 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.817 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:58.817 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.817 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.817 { 00:18:58.817 "cntlid": 135, 00:18:58.817 "qid": 0, 00:18:58.817 "state": "enabled", 00:18:58.817 "thread": "nvmf_tgt_poll_group_000", 00:18:58.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:18:58.817 "listen_address": { 00:18:58.817 "trtype": "TCP", 00:18:58.817 "adrfam": "IPv4", 00:18:58.817 "traddr": "10.0.0.2", 00:18:58.817 "trsvcid": "4420" 00:18:58.817 }, 00:18:58.817 "peer_address": { 00:18:58.817 "trtype": "TCP", 00:18:58.817 "adrfam": "IPv4", 00:18:58.817 "traddr": "10.0.0.1", 00:18:58.817 "trsvcid": "42690" 00:18:58.817 }, 00:18:58.817 "auth": { 00:18:58.817 "state": "completed", 00:18:58.817 "digest": "sha512", 00:18:58.817 "dhgroup": "ffdhe6144" 00:18:58.817 } 00:18:58.817 } 00:18:58.817 ]' 00:18:58.817 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.087 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.087 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.087 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:59.087 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.087 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.087 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.087 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.347 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:18:59.347 05:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:18:59.916 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.916 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:59.916 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.916 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.916 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.916 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.916 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.916 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:59.916 05:13:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:59.916 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:59.916 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.916 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:59.916 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:59.916 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:59.916 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.916 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.916 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.916 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.916 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.916 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.916 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.916 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.484 00:19:00.484 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.484 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.484 05:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.744 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.744 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.744 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.744 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.744 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.744 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.744 { 00:19:00.744 "cntlid": 137, 00:19:00.744 "qid": 0, 00:19:00.744 "state": "enabled", 00:19:00.744 "thread": "nvmf_tgt_poll_group_000", 00:19:00.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:19:00.744 "listen_address": { 00:19:00.744 "trtype": "TCP", 00:19:00.744 "adrfam": "IPv4", 00:19:00.744 "traddr": "10.0.0.2", 00:19:00.744 
"trsvcid": "4420" 00:19:00.744 }, 00:19:00.744 "peer_address": { 00:19:00.744 "trtype": "TCP", 00:19:00.744 "adrfam": "IPv4", 00:19:00.744 "traddr": "10.0.0.1", 00:19:00.744 "trsvcid": "42730" 00:19:00.744 }, 00:19:00.744 "auth": { 00:19:00.744 "state": "completed", 00:19:00.744 "digest": "sha512", 00:19:00.744 "dhgroup": "ffdhe8192" 00:19:00.744 } 00:19:00.744 } 00:19:00.744 ]' 00:19:00.744 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.744 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.744 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.744 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:00.744 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.744 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.744 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.744 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.003 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:19:01.003 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:19:01.571 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.571 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:19:01.571 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.571 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.571 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.571 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.571 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:01.571 05:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:01.830 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:01.830 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.830 05:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:01.830 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:01.830 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:01.830 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.831 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.831 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.831 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.831 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.831 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.831 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.831 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.399 00:19:02.399 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.399 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.399 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.399 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.399 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.399 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.399 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.399 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.399 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.399 { 00:19:02.399 "cntlid": 139, 00:19:02.399 "qid": 0, 00:19:02.399 "state": "enabled", 00:19:02.399 "thread": "nvmf_tgt_poll_group_000", 00:19:02.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:19:02.399 "listen_address": { 00:19:02.399 "trtype": "TCP", 00:19:02.399 "adrfam": "IPv4", 00:19:02.399 "traddr": "10.0.0.2", 00:19:02.399 "trsvcid": "4420" 00:19:02.399 }, 00:19:02.399 "peer_address": { 00:19:02.399 "trtype": "TCP", 00:19:02.399 "adrfam": "IPv4", 00:19:02.399 "traddr": "10.0.0.1", 00:19:02.399 "trsvcid": "32908" 00:19:02.399 }, 00:19:02.399 "auth": { 00:19:02.399 "state": "completed", 00:19:02.399 "digest": "sha512", 00:19:02.399 "dhgroup": "ffdhe8192" 00:19:02.399 } 00:19:02.399 } 00:19:02.399 ]' 00:19:02.658 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.658 05:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.658 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.658 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:02.658 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.658 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.658 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.658 05:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.916 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:19:02.916 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: --dhchap-ctrl-secret DHHC-1:02:MmMxYzVjNzgxYjgyZjBkMGE1ODMxMTRlZDQwNWE1MzMwZGFkOTBhMDY1Njg5ZWVlzKGJ8g==: 00:19:03.483 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.483 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:19:03.483 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.483 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.483 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.483 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.483 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:03.483 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:03.483 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:03.483 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.483 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:03.483 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:03.483 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:03.483 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.483 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:03.483 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.483 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.741 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.741 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.741 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.741 05:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.000 00:19:04.000 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.000 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.000 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.258 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.258 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.258 05:13:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.258 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.258 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.258 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.258 { 00:19:04.258 "cntlid": 141, 00:19:04.258 "qid": 0, 00:19:04.258 "state": "enabled", 00:19:04.258 "thread": "nvmf_tgt_poll_group_000", 00:19:04.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:19:04.258 "listen_address": { 00:19:04.258 "trtype": "TCP", 00:19:04.258 "adrfam": "IPv4", 00:19:04.258 "traddr": "10.0.0.2", 00:19:04.258 "trsvcid": "4420" 00:19:04.258 }, 00:19:04.258 "peer_address": { 00:19:04.258 "trtype": "TCP", 00:19:04.258 "adrfam": "IPv4", 00:19:04.258 "traddr": "10.0.0.1", 00:19:04.258 "trsvcid": "32936" 00:19:04.258 }, 00:19:04.258 "auth": { 00:19:04.258 "state": "completed", 00:19:04.258 "digest": "sha512", 00:19:04.258 "dhgroup": "ffdhe8192" 00:19:04.258 } 00:19:04.258 } 00:19:04.258 ]' 00:19:04.258 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.258 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.258 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.517 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:04.517 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.517 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.517 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.517 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.517 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:19:04.517 05:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:01:MGNmOGIwN2I5NjFjNmQ2MGRiOGQ4N2FlOTI2MzEyYjGEiNVP: 00:19:05.083 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.083 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:19:05.083 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.084 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.084 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.084 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.084 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:05.084 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:05.342 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:05.342 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.342 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:05.342 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:05.342 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:05.342 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.342 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:19:05.342 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.342 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.342 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.342 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:05.342 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:05.342 05:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:05.909 00:19:05.909 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.909 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.909 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.167 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.167 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.167 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.167 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.167 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.167 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.167 { 00:19:06.167 "cntlid": 143, 00:19:06.167 "qid": 0, 00:19:06.167 "state": "enabled", 00:19:06.167 "thread": "nvmf_tgt_poll_group_000", 00:19:06.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:19:06.167 "listen_address": { 00:19:06.167 "trtype": "TCP", 00:19:06.167 "adrfam": 
"IPv4", 00:19:06.167 "traddr": "10.0.0.2", 00:19:06.167 "trsvcid": "4420" 00:19:06.167 }, 00:19:06.167 "peer_address": { 00:19:06.167 "trtype": "TCP", 00:19:06.167 "adrfam": "IPv4", 00:19:06.167 "traddr": "10.0.0.1", 00:19:06.167 "trsvcid": "32974" 00:19:06.167 }, 00:19:06.167 "auth": { 00:19:06.167 "state": "completed", 00:19:06.167 "digest": "sha512", 00:19:06.167 "dhgroup": "ffdhe8192" 00:19:06.167 } 00:19:06.167 } 00:19:06.167 ]' 00:19:06.167 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.167 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.167 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.167 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:06.167 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.167 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.167 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.167 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.425 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:19:06.425 05:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 
809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:19:07.035 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.035 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:19:07.035 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.035 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.035 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.035 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:07.035 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:07.035 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:07.035 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:07.035 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:07.035 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:07.317 05:13:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:07.317 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.317 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:07.317 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:07.317 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:07.317 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.317 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.317 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.317 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.317 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.317 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.317 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.317 05:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.926 00:19:07.926 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.926 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.926 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.926 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.926 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.926 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.926 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.926 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.926 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.926 { 00:19:07.926 "cntlid": 145, 00:19:07.926 "qid": 0, 00:19:07.926 "state": "enabled", 00:19:07.926 "thread": "nvmf_tgt_poll_group_000", 00:19:07.926 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:19:07.926 "listen_address": { 00:19:07.926 "trtype": "TCP", 00:19:07.926 "adrfam": "IPv4", 00:19:07.926 "traddr": "10.0.0.2", 00:19:07.926 "trsvcid": "4420" 00:19:07.926 }, 00:19:07.926 "peer_address": { 00:19:07.926 "trtype": "TCP", 00:19:07.926 "adrfam": "IPv4", 00:19:07.926 "traddr": "10.0.0.1", 00:19:07.926 "trsvcid": "33004" 00:19:07.926 }, 00:19:07.926 "auth": { 00:19:07.926 "state": 
"completed", 00:19:07.926 "digest": "sha512", 00:19:07.926 "dhgroup": "ffdhe8192" 00:19:07.926 } 00:19:07.926 } 00:19:07.927 ]' 00:19:07.927 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.927 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.927 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.212 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:08.212 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.212 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.212 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.212 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.212 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:19:08.212 05:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQzMjhiYjU1OTA1Y2FlNDQ0NWUwM2JiMzMyZDMxNzcyNTlmNDI5MGI0M2EyZDU1Gz+wzw==: --dhchap-ctrl-secret 
DHHC-1:03:ZTY4M2I4NWRjMWExYWI1YjhmYzA0ZGMxNDc0YzEzMTA0NmI3NjMyZWVkODBjMmYwNjhhYzg3MDFhNmYxOGI5MtTjT4A=: 00:19:08.890 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.890 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:19:08.890 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.890 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.890 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.890 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 00:19:08.890 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.890 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.890 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.890 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:08.890 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:08.891 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:08.891 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:19:08.891 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.891 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:08.891 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.891 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:08.891 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:08.891 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:09.174 request: 00:19:09.174 { 00:19:09.174 "name": "nvme0", 00:19:09.174 "trtype": "tcp", 00:19:09.174 "traddr": "10.0.0.2", 00:19:09.174 "adrfam": "ipv4", 00:19:09.174 "trsvcid": "4420", 00:19:09.174 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:09.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:19:09.174 "prchk_reftag": false, 00:19:09.174 "prchk_guard": false, 00:19:09.174 "hdgst": false, 00:19:09.174 "ddgst": false, 00:19:09.174 "dhchap_key": "key2", 00:19:09.174 "allow_unrecognized_csi": false, 00:19:09.175 "method": "bdev_nvme_attach_controller", 00:19:09.175 "req_id": 1 00:19:09.175 } 00:19:09.175 Got JSON-RPC error response 00:19:09.175 response: 00:19:09.175 { 00:19:09.175 "code": -5, 00:19:09.175 "message": 
"Input/output error" 00:19:09.175 } 00:19:09.434 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:09.434 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:09.434 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:09.434 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:09.434 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:19:09.434 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.434 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.434 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.434 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.434 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.434 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.434 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.434 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:09.434 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:09.434 05:13:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:09.434 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:09.434 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.434 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:09.434 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.434 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:09.434 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:09.434 05:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:09.693 request: 00:19:09.693 { 00:19:09.693 "name": "nvme0", 00:19:09.693 "trtype": "tcp", 00:19:09.693 "traddr": "10.0.0.2", 00:19:09.693 "adrfam": "ipv4", 00:19:09.693 "trsvcid": "4420", 00:19:09.693 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:09.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:19:09.693 "prchk_reftag": false, 00:19:09.693 "prchk_guard": false, 00:19:09.693 "hdgst": 
false, 00:19:09.693 "ddgst": false, 00:19:09.693 "dhchap_key": "key1", 00:19:09.693 "dhchap_ctrlr_key": "ckey2", 00:19:09.693 "allow_unrecognized_csi": false, 00:19:09.693 "method": "bdev_nvme_attach_controller", 00:19:09.693 "req_id": 1 00:19:09.693 } 00:19:09.693 Got JSON-RPC error response 00:19:09.693 response: 00:19:09.693 { 00:19:09.693 "code": -5, 00:19:09.693 "message": "Input/output error" 00:19:09.693 } 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.693 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.262 request: 00:19:10.263 { 00:19:10.263 "name": "nvme0", 00:19:10.263 "trtype": 
"tcp", 00:19:10.263 "traddr": "10.0.0.2", 00:19:10.263 "adrfam": "ipv4", 00:19:10.263 "trsvcid": "4420", 00:19:10.263 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:10.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:19:10.263 "prchk_reftag": false, 00:19:10.263 "prchk_guard": false, 00:19:10.263 "hdgst": false, 00:19:10.263 "ddgst": false, 00:19:10.263 "dhchap_key": "key1", 00:19:10.263 "dhchap_ctrlr_key": "ckey1", 00:19:10.263 "allow_unrecognized_csi": false, 00:19:10.263 "method": "bdev_nvme_attach_controller", 00:19:10.263 "req_id": 1 00:19:10.263 } 00:19:10.263 Got JSON-RPC error response 00:19:10.263 response: 00:19:10.263 { 00:19:10.263 "code": -5, 00:19:10.263 "message": "Input/output error" 00:19:10.263 } 00:19:10.263 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:10.263 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:10.263 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:10.263 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:10.263 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:19:10.263 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.263 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.263 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.263 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 465982 00:19:10.263 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 465982 ']' 00:19:10.263 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 465982 00:19:10.263 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:10.263 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.263 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 465982 00:19:10.263 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:10.263 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:10.263 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 465982' 00:19:10.263 killing process with pid 465982 00:19:10.263 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 465982 00:19:10.263 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 465982 00:19:10.522 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:10.522 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:10.522 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:10.522 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.522 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:10.522 05:13:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=489350 00:19:10.522 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 489350 00:19:10.522 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 489350 ']' 00:19:10.522 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.522 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.522 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.522 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.522 05:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.781 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:10.781 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:10.781 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:10.781 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:10.781 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.781 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.781 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:10.781 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # 
waitforlisten 489350 00:19:10.781 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 489350 ']' 00:19:10.781 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.781 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.781 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.781 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.781 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.040 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.040 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:11.040 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:11.040 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.041 null0 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pOy 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.njS ]] 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.njS 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.GNZ 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.fZC ]] 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fZC 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lbO 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.vxO ]] 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.vxO 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.dqj 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.041 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.300 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.300 05:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:11.300 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:11.301 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.301 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:11.301 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:11.301 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:11.301 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.301 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:19:11.301 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.301 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.301 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.301 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:11.301 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:11.301 05:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:11.869 nvme0n1 00:19:11.869 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.869 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.869 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.128 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.128 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.128 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.128 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.128 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.128 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.128 { 00:19:12.128 "cntlid": 1, 00:19:12.128 "qid": 0, 00:19:12.128 "state": "enabled", 00:19:12.128 "thread": "nvmf_tgt_poll_group_000", 00:19:12.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:19:12.128 "listen_address": { 00:19:12.128 "trtype": "TCP", 00:19:12.128 "adrfam": "IPv4", 00:19:12.128 "traddr": "10.0.0.2", 00:19:12.128 "trsvcid": "4420" 00:19:12.128 }, 00:19:12.128 "peer_address": { 00:19:12.128 "trtype": "TCP", 00:19:12.128 "adrfam": "IPv4", 00:19:12.128 "traddr": "10.0.0.1", 00:19:12.128 "trsvcid": "53528" 00:19:12.128 }, 00:19:12.128 "auth": { 
00:19:12.128 "state": "completed", 00:19:12.128 "digest": "sha512", 00:19:12.128 "dhgroup": "ffdhe8192" 00:19:12.128 } 00:19:12.128 } 00:19:12.128 ]' 00:19:12.128 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.128 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.128 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.128 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:12.128 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.128 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.128 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.128 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.385 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:19:12.385 05:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:19:12.952 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:19:12.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.952 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:19:12.952 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.952 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.952 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.952 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:19:12.952 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.952 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.952 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.952 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:12.952 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:13.210 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:13.210 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:13.210 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 
--dhchap-key key3 00:19:13.210 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:13.210 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.210 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:13.210 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.210 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:13.210 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.210 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.467 request: 00:19:13.467 { 00:19:13.467 "name": "nvme0", 00:19:13.467 "trtype": "tcp", 00:19:13.467 "traddr": "10.0.0.2", 00:19:13.467 "adrfam": "ipv4", 00:19:13.467 "trsvcid": "4420", 00:19:13.467 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:13.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:19:13.467 "prchk_reftag": false, 00:19:13.467 "prchk_guard": false, 00:19:13.467 "hdgst": false, 00:19:13.467 "ddgst": false, 00:19:13.467 "dhchap_key": "key3", 00:19:13.467 "allow_unrecognized_csi": false, 00:19:13.467 "method": "bdev_nvme_attach_controller", 00:19:13.467 "req_id": 1 00:19:13.467 } 
00:19:13.468 Got JSON-RPC error response 00:19:13.468 response: 00:19:13.468 { 00:19:13.468 "code": -5, 00:19:13.468 "message": "Input/output error" 00:19:13.468 } 00:19:13.468 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:13.468 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:13.468 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:13.468 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:13.468 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:13.468 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:13.468 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:13.468 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:13.726 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:13.726 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:13.726 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:13.726 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:13.726 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.726 05:13:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:13.726 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.726 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:13.726 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.726 05:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.726 request: 00:19:13.726 { 00:19:13.726 "name": "nvme0", 00:19:13.726 "trtype": "tcp", 00:19:13.726 "traddr": "10.0.0.2", 00:19:13.726 "adrfam": "ipv4", 00:19:13.726 "trsvcid": "4420", 00:19:13.726 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:13.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:19:13.726 "prchk_reftag": false, 00:19:13.726 "prchk_guard": false, 00:19:13.726 "hdgst": false, 00:19:13.726 "ddgst": false, 00:19:13.726 "dhchap_key": "key3", 00:19:13.726 "allow_unrecognized_csi": false, 00:19:13.726 "method": "bdev_nvme_attach_controller", 00:19:13.726 "req_id": 1 00:19:13.726 } 00:19:13.726 Got JSON-RPC error response 00:19:13.726 response: 00:19:13.726 { 00:19:13.726 "code": -5, 00:19:13.726 "message": "Input/output error" 00:19:13.726 } 00:19:13.726 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:13.726 05:13:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:13.726 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:13.726 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:13.726 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:13.726 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:13.726 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:13.726 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:13.726 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:13.726 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:13.985 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:19:13.985 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.985 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.985 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.985 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:19:13.985 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.985 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.985 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.985 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:13.985 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:13.985 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:13.985 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:13.985 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.985 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:13.985 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.985 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:13.985 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:13.985 05:13:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:14.244 request: 00:19:14.244 { 00:19:14.244 "name": "nvme0", 00:19:14.244 "trtype": "tcp", 00:19:14.244 "traddr": "10.0.0.2", 00:19:14.244 "adrfam": "ipv4", 00:19:14.244 "trsvcid": "4420", 00:19:14.244 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:14.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:19:14.244 "prchk_reftag": false, 00:19:14.244 "prchk_guard": false, 00:19:14.244 "hdgst": false, 00:19:14.244 "ddgst": false, 00:19:14.244 "dhchap_key": "key0", 00:19:14.244 "dhchap_ctrlr_key": "key1", 00:19:14.244 "allow_unrecognized_csi": false, 00:19:14.244 "method": "bdev_nvme_attach_controller", 00:19:14.244 "req_id": 1 00:19:14.244 } 00:19:14.244 Got JSON-RPC error response 00:19:14.244 response: 00:19:14.244 { 00:19:14.244 "code": -5, 00:19:14.244 "message": "Input/output error" 00:19:14.244 } 00:19:14.502 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:14.502 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:14.502 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:14.502 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:14.502 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:14.502 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:14.502 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:14.502 nvme0n1 00:19:14.760 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:14.760 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:14.760 05:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.760 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.760 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.760 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.019 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 00:19:15.019 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.019 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.019 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:19:15.019 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:15.019 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:15.019 05:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:15.956 nvme0n1 00:19:15.956 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:15.956 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.956 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:15.956 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.956 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:15.956 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.956 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.956 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:15.956 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:15.956 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.956 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:16.215 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.215 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:19:16.215 05:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: --dhchap-ctrl-secret DHHC-1:03:YjNmN2U1MTVjZDIwYzZiOWE1MTAyZGRkZWU3MWNjZjA0YTJhN2Q3ZWU3M2I1MTBmYzg2NGNmYmVjY2E2MTUyMKrvN9E=: 00:19:16.784 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:16.784 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:16.784 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:16.784 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:16.784 05:13:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:16.784 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:16.784 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:16.784 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.784 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.784 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:19:16.784 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:16.784 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:16.784 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:16.784 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.784 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:16.784 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.784 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:16.784 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 
00:19:16.784 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:17.352 request: 00:19:17.352 { 00:19:17.352 "name": "nvme0", 00:19:17.352 "trtype": "tcp", 00:19:17.352 "traddr": "10.0.0.2", 00:19:17.352 "adrfam": "ipv4", 00:19:17.352 "trsvcid": "4420", 00:19:17.352 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:17.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:19:17.352 "prchk_reftag": false, 00:19:17.352 "prchk_guard": false, 00:19:17.352 "hdgst": false, 00:19:17.352 "ddgst": false, 00:19:17.352 "dhchap_key": "key1", 00:19:17.352 "allow_unrecognized_csi": false, 00:19:17.352 "method": "bdev_nvme_attach_controller", 00:19:17.352 "req_id": 1 00:19:17.352 } 00:19:17.352 Got JSON-RPC error response 00:19:17.352 response: 00:19:17.352 { 00:19:17.352 "code": -5, 00:19:17.352 "message": "Input/output error" 00:19:17.352 } 00:19:17.352 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:17.352 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:17.352 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:17.352 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:17.352 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:17.352 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:17.352 05:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:17.921 nvme0n1 00:19:18.180 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:18.180 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:18.180 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.180 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.180 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.180 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.439 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:19:18.439 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.440 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.440 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.440 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:18.440 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:18.440 05:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:18.698 nvme0n1 00:19:18.699 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:18.699 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:18.699 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.958 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.958 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.958 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.217 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:19.217 
05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:19.217 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:19.217 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:19.217 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: '' 2s
00:19:19.217 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:19:19.217 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:19:19.218 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz:
00:19:19.218 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:19:19.218 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:19:19.218 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:19:19.218 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz: ]]
00:19:19.218 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MmFhODBiN2E4NmVjMDZiNGIzYWMxYjUyZTE2OTIzYjgvotHz:
00:19:19.218 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:19:19.218 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:19:19.218 05:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: 2s
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==:
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==: ]]
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NDNhZGVmMDExY2M4NDJlMjNiYWJmYWMzNTE5NWY5ODk3YjI2ZjFhMWZkNGQ1NWVhoEzPPQ==:
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:19:21.123 05:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:19:23.655 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:19:23.655 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:19:23.655 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:19:23.655 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:19:23.655 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:19:23.655 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:19:23.655 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
00:19:23.655 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:23.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:23.655 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1
00:19:23.655 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.655 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:23.655 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.655 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:19:23.655 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:19:23.656 05:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:19:23.915 nvme0n1
00:19:23.915 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3
00:19:23.915 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.915 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:23.915 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.915 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:19:23.915 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:19:24.482 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:19:24.482 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:19:24.482 05:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:24.742 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:24.742 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562
00:19:24.742 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:24.742 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:24.742 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:24.742 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0
00:19:24.742 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0
00:19:25.001 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers
00:19:25.001 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name'
00:19:25.001 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:25.001 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:25.001 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3
00:19:25.001 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:25.001 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:25.001 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:25.001 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:19:25.001 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:19:25.001 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:19:25.001 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:19:25.001 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:25.001 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:19:25.001 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:25.001 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:19:25.001 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:19:25.569 request:
00:19:25.569 {
00:19:25.569 "name": "nvme0",
00:19:25.569 "dhchap_key": "key1",
00:19:25.569 "dhchap_ctrlr_key": "key3",
00:19:25.569 "method": "bdev_nvme_set_keys",
00:19:25.569 "req_id": 1
00:19:25.569 }
00:19:25.569 Got JSON-RPC error response
00:19:25.569 response:
00:19:25.569 {
00:19:25.569 "code": -13,
00:19:25.569 "message": "Permission denied"
00:19:25.569 }
00:19:25.569 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:19:25.569 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:25.569 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:25.569 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:25.569 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:19:25.569 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:19:25.569 05:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:25.828 05:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 ))
00:19:25.828 05:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s
00:19:26.765 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:19:26.765 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:19:26.765 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:27.024 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 ))
00:19:27.024 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1
00:19:27.024 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:27.024 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:27.024 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:27.024 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:19:27.024 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:19:27.024 05:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:19:27.592 nvme0n1
00:19:27.592 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3
00:19:27.592 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:27.592 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:27.592 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:27.592 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:19:27.592 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:19:27.592 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:19:27.592 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:19:27.592 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:27.592 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:19:27.593 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:27.593 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:19:27.593 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:19:28.161 request:
00:19:28.161 {
00:19:28.161 "name": "nvme0",
00:19:28.161 "dhchap_key": "key2",
00:19:28.161 "dhchap_ctrlr_key": "key0",
00:19:28.161 "method": "bdev_nvme_set_keys",
00:19:28.161 "req_id": 1
00:19:28.161 }
00:19:28.161 Got JSON-RPC error response
00:19:28.161 response:
00:19:28.161 {
00:19:28.161 "code": -13,
00:19:28.161 "message": "Permission denied"
00:19:28.161 }
00:19:28.161 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:19:28.161 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:28.161 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:28.161 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:28.161 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:19:28.161 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:19:28.161 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:28.421 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 ))
00:19:28.421 05:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s
00:19:29.359 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:19:29.359 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:19:29.359 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:29.618 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 ))
00:19:29.618 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT
00:19:29.618 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup
00:19:29.618 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 466261
00:19:29.618 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 466261 ']'
00:19:29.618 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 466261
00:19:29.618 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:19:29.618 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:29.618 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 466261
00:19:29.618 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:19:29.618 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:19:29.618 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 466261'
killing process with pid 466261
00:19:29.618 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 466261
00:19:29.618 05:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 466261
00:19:29.877 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini
00:19:29.877 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:29.877 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync
00:19:29.877 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:29.877 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e
00:19:29.877 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:29.877 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:29.877 rmmod nvme_tcp
00:19:29.877 rmmod nvme_fabrics
00:19:30.136 rmmod nvme_keyring
00:19:30.136 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:30.136 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e
00:19:30.136 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0
00:19:30.136 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 489350 ']'
00:19:30.136 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 489350
00:19:30.136 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 489350 ']'
00:19:30.136 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 489350
00:19:30.136 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:19:30.136 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:30.136 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 489350
00:19:30.136 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:30.136 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:30.136 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 489350'
killing process with pid 489350
00:19:30.136 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 489350
00:19:30.136 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 489350
00:19:30.395 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:30.395 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:19:30.395 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:19:30.395 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr
00:19:30.395 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save
00:19:30.395 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:19:30.396 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore
00:19:30.396 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:19:30.396 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:19:30.396 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:30.396 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:30.396 05:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:32.299 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:19:32.299 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.pOy /tmp/spdk.key-sha256.GNZ /tmp/spdk.key-sha384.lbO /tmp/spdk.key-sha512.dqj /tmp/spdk.key-sha512.njS /tmp/spdk.key-sha384.fZC /tmp/spdk.key-sha256.vxO '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log
00:19:32.299
00:19:32.299 real 2m35.954s
00:19:32.299 user 5m48.699s
00:19:32.299 sys 0m32.343s
00:19:32.299 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:32.299 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:32.299 ************************************
00:19:32.299 END TEST nvmf_auth_target
00:19:32.299 ************************************
00:19:32.559 05:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']'
00:19:32.559 05:14:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:19:32.559 05:14:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:19:32.559 05:14:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:32.559 05:14:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:32.559 ************************************
00:19:32.559 START TEST nvmf_bdevio_no_huge
00:19:32.559 ************************************
00:19:32.559 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:19:32.559 * Looking for test storage...
00:19:32.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:19:32.559 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:19:32.559 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version
00:19:32.559 05:14:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-:
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-:
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<'
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:19:32.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:32.559 --rc genhtml_branch_coverage=1
00:19:32.559 --rc genhtml_function_coverage=1
00:19:32.559 --rc genhtml_legend=1
00:19:32.559 --rc geninfo_all_blocks=1
00:19:32.559 --rc geninfo_unexecuted_blocks=1
00:19:32.559
00:19:32.559 '
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:19:32.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:32.559 --rc genhtml_branch_coverage=1
00:19:32.559 --rc genhtml_function_coverage=1
00:19:32.559 --rc genhtml_legend=1
00:19:32.559 --rc geninfo_all_blocks=1
00:19:32.559 --rc geninfo_unexecuted_blocks=1
00:19:32.559
00:19:32.559 '
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:19:32.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:32.559 --rc genhtml_branch_coverage=1
00:19:32.559 --rc genhtml_function_coverage=1
00:19:32.559 --rc genhtml_legend=1
00:19:32.559 --rc geninfo_all_blocks=1
00:19:32.559 --rc geninfo_unexecuted_blocks=1
00:19:32.559
00:19:32.559 '
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:19:32.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:32.559 --rc genhtml_branch_coverage=1
00:19:32.559 --rc genhtml_function_coverage=1
00:19:32.559 --rc genhtml_legend=1
00:19:32.559 --rc geninfo_all_blocks=1
00:19:32.559 --rc geninfo_unexecuted_blocks=1
00:19:32.559
00:19:32.559 '
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:19:32.559 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:32.825 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH
00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0
00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:19:32.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0
00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:32.826 05:14:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 
0x159b)' 00:19:40.950 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:40.950 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:40.950 Found net devices under 0000:af:00.0: cvl_0_0 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.950 
05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:40.950 Found net devices under 0000:af:00.1: cvl_0_1 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:40.950 05:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:40.950 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.950 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.950 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.950 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:40.950 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.950 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:40.950 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:19:40.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:19:40.951 00:19:40.951 --- 10.0.0.2 ping statistics --- 00:19:40.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.951 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:40.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:19:40.951 00:19:40.951 --- 10.0.0.1 ping statistics --- 00:19:40.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.951 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=496435 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 496435 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 496435 ']' 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.951 05:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.951 [2024-12-09 05:14:22.400855] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:19:40.951 [2024-12-09 05:14:22.400915] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:40.951 [2024-12-09 05:14:22.509418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:40.951 [2024-12-09 05:14:22.564229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.951 [2024-12-09 05:14:22.564262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.951 [2024-12-09 05:14:22.564272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.951 [2024-12-09 05:14:22.564280] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.951 [2024-12-09 05:14:22.564287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:40.951 [2024-12-09 05:14:22.565692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:40.951 [2024-12-09 05:14:22.565778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:40.951 [2024-12-09 05:14:22.565890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:40.951 [2024-12-09 05:14:22.565890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.951 [2024-12-09 05:14:23.296158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:40.951 05:14:23 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.951 Malloc0 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.951 [2024-12-09 05:14:23.341181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.951 05:14:23 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:40.951 { 00:19:40.951 "params": { 00:19:40.951 "name": "Nvme$subsystem", 00:19:40.951 "trtype": "$TEST_TRANSPORT", 00:19:40.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.951 "adrfam": "ipv4", 00:19:40.951 "trsvcid": "$NVMF_PORT", 00:19:40.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.951 "hdgst": ${hdgst:-false}, 00:19:40.951 "ddgst": ${ddgst:-false} 00:19:40.951 }, 00:19:40.951 "method": "bdev_nvme_attach_controller" 00:19:40.951 } 00:19:40.951 EOF 00:19:40.951 )") 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:40.951 05:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:40.951 "params": { 00:19:40.951 "name": "Nvme1", 00:19:40.951 "trtype": "tcp", 00:19:40.951 "traddr": "10.0.0.2", 00:19:40.951 "adrfam": "ipv4", 00:19:40.951 "trsvcid": "4420", 00:19:40.951 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.951 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:40.951 "hdgst": false, 00:19:40.951 "ddgst": false 00:19:40.951 }, 00:19:40.951 "method": "bdev_nvme_attach_controller" 00:19:40.951 }' 00:19:40.951 [2024-12-09 05:14:23.396454] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:19:40.951 [2024-12-09 05:14:23.396504] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid496671 ] 00:19:41.210 [2024-12-09 05:14:23.492342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:41.210 [2024-12-09 05:14:23.550189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.210 [2024-12-09 05:14:23.550297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.210 [2024-12-09 05:14:23.550298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.470 I/O targets: 00:19:41.470 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:41.470 00:19:41.470 00:19:41.470 CUnit - A unit testing framework for C - Version 2.1-3 00:19:41.470 http://cunit.sourceforge.net/ 00:19:41.470 00:19:41.470 00:19:41.470 Suite: bdevio tests on: Nvme1n1 00:19:41.470 Test: blockdev write read block ...passed 00:19:41.470 Test: blockdev write zeroes read block ...passed 00:19:41.470 Test: blockdev write zeroes read no split ...passed 00:19:41.470 Test: blockdev 
write zeroes read split ...passed 00:19:41.470 Test: blockdev write zeroes read split partial ...passed 00:19:41.470 Test: blockdev reset ...[2024-12-09 05:14:23.869234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:41.470 [2024-12-09 05:14:23.869298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c59ad0 (9): Bad file descriptor 00:19:41.728 [2024-12-09 05:14:24.018746] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:19:41.728 passed 00:19:41.728 Test: blockdev write read 8 blocks ...passed 00:19:41.728 Test: blockdev write read size > 128k ...passed 00:19:41.728 Test: blockdev write read invalid size ...passed 00:19:41.728 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:41.728 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:41.728 Test: blockdev write read max offset ...passed 00:19:41.988 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:41.988 Test: blockdev writev readv 8 blocks ...passed 00:19:41.988 Test: blockdev writev readv 30 x 1block ...passed 00:19:41.988 Test: blockdev writev readv block ...passed 00:19:41.988 Test: blockdev writev readv size > 128k ...passed 00:19:41.988 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:41.988 Test: blockdev comparev and writev ...[2024-12-09 05:14:24.269116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.988 [2024-12-09 05:14:24.269145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:41.988 [2024-12-09 05:14:24.269160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.988 
[2024-12-09 05:14:24.269171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:41.988 [2024-12-09 05:14:24.269414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.988 [2024-12-09 05:14:24.269426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:41.988 [2024-12-09 05:14:24.269440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.988 [2024-12-09 05:14:24.269450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:41.988 [2024-12-09 05:14:24.269691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.988 [2024-12-09 05:14:24.269709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:41.988 [2024-12-09 05:14:24.269723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.988 [2024-12-09 05:14:24.269732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:41.988 [2024-12-09 05:14:24.269960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.988 [2024-12-09 05:14:24.269972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:41.988 [2024-12-09 05:14:24.269986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.988 [2024-12-09 05:14:24.269995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:41.988 passed 00:19:41.988 Test: blockdev nvme passthru rw ...passed 00:19:41.988 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:14:24.351614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:41.988 [2024-12-09 05:14:24.351632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:41.988 [2024-12-09 05:14:24.351746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:41.988 [2024-12-09 05:14:24.351757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:41.988 [2024-12-09 05:14:24.351863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:41.988 [2024-12-09 05:14:24.351874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:41.988 [2024-12-09 05:14:24.351981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:41.988 [2024-12-09 05:14:24.351992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:41.988 passed 00:19:41.988 Test: blockdev nvme admin passthru ...passed 00:19:41.988 Test: blockdev copy ...passed 00:19:41.988 00:19:41.988 Run Summary: Type Total Ran Passed Failed Inactive 00:19:41.988 suites 1 1 n/a 0 0 00:19:41.988 tests 23 23 23 0 0 00:19:41.988 asserts 152 152 152 0 n/a 00:19:41.988 00:19:41.988 Elapsed time = 1.357 
seconds 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:42.556 rmmod nvme_tcp 00:19:42.556 rmmod nvme_fabrics 00:19:42.556 rmmod nvme_keyring 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 496435 ']' 00:19:42.556 05:14:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 496435 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 496435 ']' 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 496435 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 496435 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 496435' 00:19:42.556 killing process with pid 496435 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 496435 00:19:42.556 05:14:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 496435 00:19:42.814 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:42.814 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:42.814 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:42.814 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:43.073 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:43.073 05:14:25 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:43.073 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:43.073 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:43.073 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:43.073 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.073 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.073 05:14:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.978 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:44.978 00:19:44.978 real 0m12.546s 00:19:44.978 user 0m14.883s 00:19:44.978 sys 0m6.838s 00:19:44.978 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.978 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:44.978 ************************************ 00:19:44.978 END TEST nvmf_bdevio_no_huge 00:19:44.978 ************************************ 00:19:44.978 05:14:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:44.978 05:14:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:44.978 05:14:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.978 05:14:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:45.250 
************************************ 00:19:45.250 START TEST nvmf_tls 00:19:45.250 ************************************ 00:19:45.250 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:45.250 * Looking for test storage... 00:19:45.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:45.250 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:45.250 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:19:45.250 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:45.250 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:45.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.251 --rc genhtml_branch_coverage=1 00:19:45.251 --rc genhtml_function_coverage=1 00:19:45.251 --rc genhtml_legend=1 00:19:45.251 --rc geninfo_all_blocks=1 00:19:45.251 --rc geninfo_unexecuted_blocks=1 00:19:45.251 00:19:45.251 ' 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:45.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.251 --rc genhtml_branch_coverage=1 00:19:45.251 --rc genhtml_function_coverage=1 00:19:45.251 --rc genhtml_legend=1 00:19:45.251 --rc geninfo_all_blocks=1 00:19:45.251 --rc geninfo_unexecuted_blocks=1 00:19:45.251 00:19:45.251 ' 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:45.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.251 --rc genhtml_branch_coverage=1 00:19:45.251 --rc genhtml_function_coverage=1 00:19:45.251 --rc genhtml_legend=1 00:19:45.251 --rc geninfo_all_blocks=1 00:19:45.251 --rc geninfo_unexecuted_blocks=1 00:19:45.251 00:19:45.251 ' 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:45.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.251 --rc genhtml_branch_coverage=1 00:19:45.251 --rc genhtml_function_coverage=1 00:19:45.251 --rc genhtml_legend=1 00:19:45.251 --rc geninfo_all_blocks=1 00:19:45.251 --rc geninfo_unexecuted_blocks=1 00:19:45.251 00:19:45.251 ' 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.251 
05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.251 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:45.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:19:45.252 05:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.376 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:53.376 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:53.376 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:53.377 05:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:53.377 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:53.377 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:53.377 05:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:53.377 Found net devices under 0000:af:00.0: cvl_0_0 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:53.377 Found net devices under 0000:af:00.1: cvl_0_1 00:19:53.377 05:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:53.377 
05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:53.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:53.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:19:53.377 00:19:53.377 --- 10.0.0.2 ping statistics --- 00:19:53.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.377 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:53.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:53.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:19:53.377 00:19:53.377 --- 10.0.0.1 ping statistics --- 00:19:53.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.377 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:53.377 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:53.378 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:53.378 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:53.378 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:53.378 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.378 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=500645 00:19:53.378 05:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:19:53.378 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 500645 00:19:53.378 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 500645 ']' 00:19:53.378 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.378 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.378 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.378 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.378 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.378 [2024-12-09 05:14:35.053452] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:19:53.378 [2024-12-09 05:14:35.053500] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.378 [2024-12-09 05:14:35.153966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.378 [2024-12-09 05:14:35.194975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.378 [2024-12-09 05:14:35.195009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:53.378 [2024-12-09 05:14:35.195019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.378 [2024-12-09 05:14:35.195027] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.378 [2024-12-09 05:14:35.195034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.378 [2024-12-09 05:14:35.195595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.637 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.637 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:53.637 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:53.637 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:53.637 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.637 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.637 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:53.637 05:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:53.637 true 00:19:53.896 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:53.896 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:53.896 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:53.896 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:53.896 
05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:54.155 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:54.155 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:54.413 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:54.413 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:54.413 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:54.413 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:54.413 05:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:54.672 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:54.672 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:54.672 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:54.672 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:54.931 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:54.931 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:54.931 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:19:55.190 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:55.191 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:55.191 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:55.191 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:55.191 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:55.449 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:55.449 05:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:55.708 05:14:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.YdpNxYedeR 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.gadTKxuAJk 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:55.708 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:55.709 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.YdpNxYedeR 00:19:55.709 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.gadTKxuAJk 00:19:55.709 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:55.968 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:56.227 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.YdpNxYedeR 00:19:56.227 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.YdpNxYedeR 00:19:56.227 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:56.486 [2024-12-09 05:14:38.731781] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.486 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:56.486 05:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:56.745 [2024-12-09 05:14:39.080666] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:56.745 [2024-12-09 05:14:39.080890] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.745 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:57.003 malloc0 00:19:57.003 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:57.003 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YdpNxYedeR 00:19:57.262 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:57.521 05:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.YdpNxYedeR 00:20:07.501 Initializing NVMe Controllers 00:20:07.501 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:07.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:07.501 Initialization complete. Launching workers. 
00:20:07.501 ======================================================== 00:20:07.501 Latency(us) 00:20:07.501 Device Information : IOPS MiB/s Average min max 00:20:07.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16224.25 63.38 3944.85 764.99 5971.02 00:20:07.501 ======================================================== 00:20:07.501 Total : 16224.25 63.38 3944.85 764.99 5971.02 00:20:07.501 00:20:07.761 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YdpNxYedeR 00:20:07.761 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:07.761 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:07.761 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:07.761 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YdpNxYedeR 00:20:07.761 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:07.761 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=503349 00:20:07.761 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:07.761 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:07.761 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 503349 /var/tmp/bdevperf.sock 00:20:07.761 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 503349 ']' 00:20:07.761 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:20:07.761 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.761 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:07.761 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.761 05:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.761 [2024-12-09 05:14:50.040869] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:20:07.761 [2024-12-09 05:14:50.040925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid503349 ] 00:20:07.761 [2024-12-09 05:14:50.133265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.761 [2024-12-09 05:14:50.172516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.699 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.699 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:08.699 05:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YdpNxYedeR 00:20:08.699 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:20:08.959 [2024-12-09 05:14:51.222842] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:08.959 TLSTESTn1 00:20:08.959 05:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:08.959 Running I/O for 10 seconds... 00:20:11.274 4776.00 IOPS, 18.66 MiB/s [2024-12-09T04:14:54.680Z] 4908.50 IOPS, 19.17 MiB/s [2024-12-09T04:14:55.615Z] 5051.00 IOPS, 19.73 MiB/s [2024-12-09T04:14:56.551Z] 5104.50 IOPS, 19.94 MiB/s [2024-12-09T04:14:57.485Z] 5069.80 IOPS, 19.80 MiB/s [2024-12-09T04:14:58.421Z] 5066.00 IOPS, 19.79 MiB/s [2024-12-09T04:14:59.795Z] 5118.43 IOPS, 19.99 MiB/s [2024-12-09T04:15:00.731Z] 5155.62 IOPS, 20.14 MiB/s [2024-12-09T04:15:01.668Z] 5171.33 IOPS, 20.20 MiB/s [2024-12-09T04:15:01.668Z] 5180.00 IOPS, 20.23 MiB/s 00:20:19.199 Latency(us) 00:20:19.199 [2024-12-09T04:15:01.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.199 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:19.199 Verification LBA range: start 0x0 length 0x2000 00:20:19.199 TLSTESTn1 : 10.01 5185.47 20.26 0.00 0.00 24650.75 5373.95 63333.99 00:20:19.199 [2024-12-09T04:15:01.669Z] =================================================================================================================== 00:20:19.199 [2024-12-09T04:15:01.669Z] Total : 5185.47 20.26 0.00 0.00 24650.75 5373.95 63333.99 00:20:19.199 { 00:20:19.199 "results": [ 00:20:19.199 { 00:20:19.199 "job": "TLSTESTn1", 00:20:19.199 "core_mask": "0x4", 00:20:19.199 "workload": "verify", 00:20:19.199 "status": "finished", 00:20:19.199 "verify_range": { 00:20:19.199 "start": 0, 00:20:19.199 "length": 8192 00:20:19.199 }, 00:20:19.199 "queue_depth": 128, 00:20:19.199 "io_size": 4096, 00:20:19.199 "runtime": 10.014139, 00:20:19.199 "iops": 
5185.468266418112, 00:20:19.199 "mibps": 20.25573541569575, 00:20:19.199 "io_failed": 0, 00:20:19.199 "io_timeout": 0, 00:20:19.199 "avg_latency_us": 24650.74824538592, 00:20:19.199 "min_latency_us": 5373.952, 00:20:19.199 "max_latency_us": 63333.9904 00:20:19.199 } 00:20:19.199 ], 00:20:19.199 "core_count": 1 00:20:19.199 } 00:20:19.199 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:19.199 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 503349 00:20:19.199 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 503349 ']' 00:20:19.199 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 503349 00:20:19.199 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:19.199 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.199 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 503349 00:20:19.199 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:19.199 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:19.199 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 503349' 00:20:19.199 killing process with pid 503349 00:20:19.199 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 503349 00:20:19.199 Received shutdown signal, test time was about 10.000000 seconds 00:20:19.199 00:20:19.199 Latency(us) 00:20:19.199 [2024-12-09T04:15:01.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.199 [2024-12-09T04:15:01.669Z] 
=================================================================================================================== 00:20:19.199 [2024-12-09T04:15:01.669Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:19.199 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 503349 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gadTKxuAJk 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gadTKxuAJk 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gadTKxuAJk 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gadTKxuAJk 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=505335 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 505335 /var/tmp/bdevperf.sock 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 505335 ']' 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:19.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.459 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.459 [2024-12-09 05:15:01.760490] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:20:19.459 [2024-12-09 05:15:01.760541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid505335 ] 00:20:19.459 [2024-12-09 05:15:01.842704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.459 [2024-12-09 05:15:01.882807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.719 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.719 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:19.719 05:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gadTKxuAJk 00:20:19.719 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:19.979 [2024-12-09 05:15:02.319336] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.979 [2024-12-09 05:15:02.323878] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:19.979 [2024-12-09 05:15:02.324494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18de700 (107): Transport endpoint is not connected 00:20:19.979 [2024-12-09 05:15:02.325485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18de700 (9): Bad file descriptor 00:20:19.979 
[2024-12-09 05:15:02.326487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:19.979 [2024-12-09 05:15:02.326500] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:19.979 [2024-12-09 05:15:02.326509] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:19.979 [2024-12-09 05:15:02.326523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:20:19.979 request: 00:20:19.979 { 00:20:19.979 "name": "TLSTEST", 00:20:19.979 "trtype": "tcp", 00:20:19.979 "traddr": "10.0.0.2", 00:20:19.979 "adrfam": "ipv4", 00:20:19.979 "trsvcid": "4420", 00:20:19.979 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.979 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:19.979 "prchk_reftag": false, 00:20:19.979 "prchk_guard": false, 00:20:19.979 "hdgst": false, 00:20:19.979 "ddgst": false, 00:20:19.979 "psk": "key0", 00:20:19.979 "allow_unrecognized_csi": false, 00:20:19.979 "method": "bdev_nvme_attach_controller", 00:20:19.979 "req_id": 1 00:20:19.979 } 00:20:19.979 Got JSON-RPC error response 00:20:19.979 response: 00:20:19.979 { 00:20:19.979 "code": -5, 00:20:19.979 "message": "Input/output error" 00:20:19.979 } 00:20:19.979 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 505335 00:20:19.979 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 505335 ']' 00:20:19.979 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 505335 00:20:19.979 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:19.979 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.979 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 505335 00:20:19.979 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:19.979 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:19.979 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 505335' 00:20:19.979 killing process with pid 505335 00:20:19.979 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 505335 00:20:19.979 Received shutdown signal, test time was about 10.000000 seconds 00:20:19.979 00:20:19.979 Latency(us) 00:20:19.979 [2024-12-09T04:15:02.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.979 [2024-12-09T04:15:02.449Z] =================================================================================================================== 00:20:19.979 [2024-12-09T04:15:02.449Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:19.979 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 505335 00:20:20.239 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:20.239 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:20.239 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YdpNxYedeR 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YdpNxYedeR 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YdpNxYedeR 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YdpNxYedeR 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=505609 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 505609 
/var/tmp/bdevperf.sock 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 505609 ']' 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.240 05:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.240 [2024-12-09 05:15:02.649518] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:20:20.240 [2024-12-09 05:15:02.649588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid505609 ] 00:20:20.499 [2024-12-09 05:15:02.735888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.499 [2024-12-09 05:15:02.774493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.066 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.066 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:21.066 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YdpNxYedeR 00:20:21.325 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:21.585 [2024-12-09 05:15:03.829368] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.585 [2024-12-09 05:15:03.837368] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:21.585 [2024-12-09 05:15:03.837394] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:21.585 [2024-12-09 05:15:03.837420] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:21.585 [2024-12-09 05:15:03.837528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2361700 (107): Transport endpoint is not connected 00:20:21.585 [2024-12-09 05:15:03.838514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2361700 (9): Bad file descriptor 00:20:21.585 [2024-12-09 05:15:03.839515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:21.585 [2024-12-09 05:15:03.839528] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:21.585 [2024-12-09 05:15:03.839541] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:21.585 [2024-12-09 05:15:03.839551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:20:21.585 request: 00:20:21.585 { 00:20:21.585 "name": "TLSTEST", 00:20:21.585 "trtype": "tcp", 00:20:21.585 "traddr": "10.0.0.2", 00:20:21.585 "adrfam": "ipv4", 00:20:21.585 "trsvcid": "4420", 00:20:21.585 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.585 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:21.585 "prchk_reftag": false, 00:20:21.585 "prchk_guard": false, 00:20:21.585 "hdgst": false, 00:20:21.585 "ddgst": false, 00:20:21.585 "psk": "key0", 00:20:21.585 "allow_unrecognized_csi": false, 00:20:21.585 "method": "bdev_nvme_attach_controller", 00:20:21.585 "req_id": 1 00:20:21.585 } 00:20:21.585 Got JSON-RPC error response 00:20:21.585 response: 00:20:21.585 { 00:20:21.585 "code": -5, 00:20:21.585 "message": "Input/output error" 00:20:21.585 } 00:20:21.585 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 505609 00:20:21.585 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 505609 ']' 00:20:21.585 05:15:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 505609 00:20:21.585 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:21.585 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.585 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 505609 00:20:21.585 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:21.585 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:21.585 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 505609' 00:20:21.585 killing process with pid 505609 00:20:21.585 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 505609 00:20:21.585 Received shutdown signal, test time was about 10.000000 seconds 00:20:21.585 00:20:21.585 Latency(us) 00:20:21.585 [2024-12-09T04:15:04.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.585 [2024-12-09T04:15:04.055Z] =================================================================================================================== 00:20:21.585 [2024-12-09T04:15:04.055Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:21.585 05:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 505609 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:21.844 05:15:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YdpNxYedeR 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YdpNxYedeR 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YdpNxYedeR 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YdpNxYedeR 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=506006 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 506006 /var/tmp/bdevperf.sock 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 506006 ']' 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.844 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.844 [2024-12-09 05:15:04.165215] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:20:21.844 [2024-12-09 05:15:04.165272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid506006 ] 00:20:21.844 [2024-12-09 05:15:04.253497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.845 [2024-12-09 05:15:04.293561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.783 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.783 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:22.783 05:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YdpNxYedeR 00:20:22.783 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:23.042 [2024-12-09 05:15:05.364197] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.042 [2024-12-09 05:15:05.371272] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:23.042 [2024-12-09 05:15:05.371306] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:23.042 [2024-12-09 05:15:05.371348] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:23.042 [2024-12-09 05:15:05.372371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d6700 (107): Transport endpoint is not connected 00:20:23.042 [2024-12-09 05:15:05.373364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d6700 (9): Bad file descriptor 00:20:23.042 [2024-12-09 05:15:05.374365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:23.042 [2024-12-09 05:15:05.374387] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:23.042 [2024-12-09 05:15:05.374396] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:23.042 [2024-12-09 05:15:05.374406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:20:23.042 request: 00:20:23.042 { 00:20:23.042 "name": "TLSTEST", 00:20:23.042 "trtype": "tcp", 00:20:23.042 "traddr": "10.0.0.2", 00:20:23.042 "adrfam": "ipv4", 00:20:23.042 "trsvcid": "4420", 00:20:23.042 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:23.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:23.042 "prchk_reftag": false, 00:20:23.042 "prchk_guard": false, 00:20:23.042 "hdgst": false, 00:20:23.042 "ddgst": false, 00:20:23.042 "psk": "key0", 00:20:23.042 "allow_unrecognized_csi": false, 00:20:23.042 "method": "bdev_nvme_attach_controller", 00:20:23.042 "req_id": 1 00:20:23.042 } 00:20:23.042 Got JSON-RPC error response 00:20:23.042 response: 00:20:23.042 { 00:20:23.042 "code": -5, 00:20:23.042 "message": "Input/output error" 00:20:23.042 } 00:20:23.042 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 506006 00:20:23.042 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 506006 ']' 00:20:23.042 05:15:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 506006 00:20:23.042 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:23.042 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:23.042 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 506006 00:20:23.042 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:23.042 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:23.042 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 506006' 00:20:23.042 killing process with pid 506006 00:20:23.042 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 506006 00:20:23.042 Received shutdown signal, test time was about 10.000000 seconds 00:20:23.042 00:20:23.042 Latency(us) 00:20:23.042 [2024-12-09T04:15:05.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.042 [2024-12-09T04:15:05.512Z] =================================================================================================================== 00:20:23.042 [2024-12-09T04:15:05.512Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:23.042 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 506006 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:23.300 05:15:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=506569 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:23.300 05:15:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 506569 /var/tmp/bdevperf.sock 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 506569 ']' 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:23.300 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:23.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:23.301 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:23.301 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.301 05:15:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:23.301 [2024-12-09 05:15:05.701530] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:20:23.301 [2024-12-09 05:15:05.701586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid506569 ] 00:20:23.560 [2024-12-09 05:15:05.790737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.560 [2024-12-09 05:15:05.830660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.128 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.128 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:24.128 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:24.387 [2024-12-09 05:15:06.711826] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:24.387 [2024-12-09 05:15:06.711854] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:24.387 request: 00:20:24.387 { 00:20:24.387 "name": "key0", 00:20:24.387 "path": "", 00:20:24.387 "method": "keyring_file_add_key", 00:20:24.387 "req_id": 1 00:20:24.387 } 00:20:24.387 Got JSON-RPC error response 00:20:24.387 response: 00:20:24.387 { 00:20:24.387 "code": -1, 00:20:24.387 "message": "Operation not permitted" 00:20:24.387 } 00:20:24.387 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:24.646 [2024-12-09 05:15:06.900405] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:20:24.646 [2024-12-09 05:15:06.900439] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:24.646 request: 00:20:24.646 { 00:20:24.646 "name": "TLSTEST", 00:20:24.646 "trtype": "tcp", 00:20:24.646 "traddr": "10.0.0.2", 00:20:24.646 "adrfam": "ipv4", 00:20:24.646 "trsvcid": "4420", 00:20:24.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:24.646 "prchk_reftag": false, 00:20:24.646 "prchk_guard": false, 00:20:24.646 "hdgst": false, 00:20:24.646 "ddgst": false, 00:20:24.646 "psk": "key0", 00:20:24.646 "allow_unrecognized_csi": false, 00:20:24.646 "method": "bdev_nvme_attach_controller", 00:20:24.646 "req_id": 1 00:20:24.646 } 00:20:24.646 Got JSON-RPC error response 00:20:24.646 response: 00:20:24.646 { 00:20:24.646 "code": -126, 00:20:24.646 "message": "Required key not available" 00:20:24.646 } 00:20:24.646 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 506569 00:20:24.646 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 506569 ']' 00:20:24.646 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 506569 00:20:24.646 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:24.646 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.646 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 506569 00:20:24.646 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:24.646 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:24.646 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 506569' 00:20:24.646 killing process with pid 506569 00:20:24.646 
05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 506569 00:20:24.646 Received shutdown signal, test time was about 10.000000 seconds 00:20:24.646 00:20:24.646 Latency(us) 00:20:24.646 [2024-12-09T04:15:07.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.646 [2024-12-09T04:15:07.116Z] =================================================================================================================== 00:20:24.646 [2024-12-09T04:15:07.116Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:24.646 05:15:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 506569 00:20:24.905 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:24.905 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:24.905 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:24.905 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:24.905 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:24.905 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 500645 00:20:24.905 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 500645 ']' 00:20:24.905 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 500645 00:20:24.905 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:24.905 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.905 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 500645 00:20:24.905 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:20:24.905 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:24.905 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 500645' 00:20:24.905 killing process with pid 500645 00:20:24.905 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 500645 00:20:24.905 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 500645 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.faGvp4uipX 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:25.165 05:15:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.faGvp4uipX 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=506883 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 506883 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 506883 ']' 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.165 05:15:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.165 [2024-12-09 05:15:07.546485] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:20:25.165 [2024-12-09 05:15:07.546533] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.423 [2024-12-09 05:15:07.640704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.423 [2024-12-09 05:15:07.678447] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.423 [2024-12-09 05:15:07.678486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.423 [2024-12-09 05:15:07.678496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.423 [2024-12-09 05:15:07.678504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.423 [2024-12-09 05:15:07.678512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:25.423 [2024-12-09 05:15:07.679123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.990 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:25.990 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:25.990 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:25.991 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:25.991 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.991 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.991 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.faGvp4uipX 00:20:25.991 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.faGvp4uipX 00:20:25.991 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:26.249 [2024-12-09 05:15:08.616622] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.249 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:26.509 05:15:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:26.767 [2024-12-09 05:15:09.013642] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:26.768 [2024-12-09 05:15:09.013868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:26.768 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:26.768 malloc0 00:20:26.768 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:27.026 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.faGvp4uipX 00:20:27.285 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:27.545 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.faGvp4uipX 00:20:27.545 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:27.545 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:27.545 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:27.545 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.faGvp4uipX 00:20:27.545 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:27.545 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:27.545 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=507203 00:20:27.545 05:15:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:27.545 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 507203 /var/tmp/bdevperf.sock 00:20:27.545 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 507203 ']' 00:20:27.545 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.545 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:27.545 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:27.545 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:27.545 05:15:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.545 [2024-12-09 05:15:09.810047] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:20:27.545 [2024-12-09 05:15:09.810096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid507203 ] 00:20:27.545 [2024-12-09 05:15:09.903309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.545 [2024-12-09 05:15:09.943266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.804 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.804 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:27.804 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.faGvp4uipX 00:20:27.804 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:28.063 [2024-12-09 05:15:10.425051] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:28.063 TLSTESTn1 00:20:28.063 05:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:28.322 Running I/O for 10 seconds... 
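bdevperf was launched above with `-o 4096`, so the MiB/s column in the results that follow is just IOPS × 4096 / 2^20, i.e. IOPS / 256. A quick sanity check of that conversion (a sketch for cross-checking the report, not part of the harness):

```python
IO_SIZE = 4096  # from the -o 4096 bdevperf argument above

def iops_to_mibps(iops: float) -> float:
    # 1 MiB = 1048576 bytes; for 4 KiB I/O this reduces to iops / 256
    return iops * IO_SIZE / (1024 * 1024)

avg_mibps = iops_to_mibps(4748.93)  # the run's average IOPS
```

This reproduces the 18.55 MiB/s average reported in the summary table.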
00:20:30.196 5363.00 IOPS, 20.95 MiB/s [2024-12-09T04:15:14.052Z] 4647.50 IOPS, 18.15 MiB/s [2024-12-09T04:15:14.617Z] 4784.67 IOPS, 18.69 MiB/s [2024-12-09T04:15:15.993Z] 4920.25 IOPS, 19.22 MiB/s [2024-12-09T04:15:16.929Z] 4897.60 IOPS, 19.13 MiB/s [2024-12-09T04:15:17.865Z] 4939.83 IOPS, 19.30 MiB/s [2024-12-09T04:15:18.803Z] 4951.43 IOPS, 19.34 MiB/s [2024-12-09T04:15:19.741Z] 4833.75 IOPS, 18.88 MiB/s [2024-12-09T04:15:20.679Z] 4775.11 IOPS, 18.65 MiB/s [2024-12-09T04:15:20.679Z] 4747.40 IOPS, 18.54 MiB/s 00:20:38.209 Latency(us) 00:20:38.209 [2024-12-09T04:15:20.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.209 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:38.209 Verification LBA range: start 0x0 length 0x2000 00:20:38.209 TLSTESTn1 : 10.02 4748.93 18.55 0.00 0.00 26908.93 6606.03 33135.00 00:20:38.209 [2024-12-09T04:15:20.679Z] =================================================================================================================== 00:20:38.209 [2024-12-09T04:15:20.679Z] Total : 4748.93 18.55 0.00 0.00 26908.93 6606.03 33135.00 00:20:38.209 { 00:20:38.209 "results": [ 00:20:38.209 { 00:20:38.209 "job": "TLSTESTn1", 00:20:38.209 "core_mask": "0x4", 00:20:38.209 "workload": "verify", 00:20:38.209 "status": "finished", 00:20:38.209 "verify_range": { 00:20:38.209 "start": 0, 00:20:38.209 "length": 8192 00:20:38.209 }, 00:20:38.209 "queue_depth": 128, 00:20:38.209 "io_size": 4096, 00:20:38.209 "runtime": 10.023512, 00:20:38.209 "iops": 4748.934305660531, 00:20:38.209 "mibps": 18.55052463148645, 00:20:38.209 "io_failed": 0, 00:20:38.209 "io_timeout": 0, 00:20:38.209 "avg_latency_us": 26908.929603731012, 00:20:38.209 "min_latency_us": 6606.0288, 00:20:38.209 "max_latency_us": 33135.0016 00:20:38.209 } 00:20:38.209 ], 00:20:38.209 "core_count": 1 00:20:38.209 } 00:20:38.468 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:20:38.468 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 507203 00:20:38.468 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 507203 ']' 00:20:38.468 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 507203 00:20:38.468 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:38.468 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.468 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 507203 00:20:38.468 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:38.468 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:38.468 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 507203' 00:20:38.468 killing process with pid 507203 00:20:38.468 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 507203 00:20:38.468 Received shutdown signal, test time was about 10.000000 seconds 00:20:38.468 00:20:38.468 Latency(us) 00:20:38.468 [2024-12-09T04:15:20.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.468 [2024-12-09T04:15:20.938Z] =================================================================================================================== 00:20:38.468 [2024-12-09T04:15:20.938Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:38.468 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 507203 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.faGvp4uipX 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.faGvp4uipX 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.faGvp4uipX 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.faGvp4uipX 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.faGvp4uipX 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=509106 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 509106 /var/tmp/bdevperf.sock 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 509106 ']' 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:38.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.727 05:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.727 [2024-12-09 05:15:21.012450] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
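The negative test above first ran `chmod 0666` on the key file, and the keyring rejects it ("Invalid permissions for key file '/tmp/tmp.faGvp4uipX': 0100666"). The error suggests the check refuses any key file accessible by group or other; a sketch of that kind of check (an assumption about the rule, not the actual `keyring_file_check_path` code):

```python
import os
import stat
import tempfile

def key_file_permissions_ok(path: str) -> bool:
    # Reject key files with any group/other permission bits set,
    # mirroring the mode-0100666 rejection seen in the log.
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

# usage sketch on a throwaway file
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o600)
ok_strict = key_file_permissions_ok(path)  # owner-only: accepted
os.chmod(path, 0o666)
ok_loose = key_file_permissions_ok(path)   # world-readable: rejected
os.remove(path)
```

This is why the test flips the file back to 0600 (target/tls.sh@182 below) before reusing it.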
00:20:38.727 [2024-12-09 05:15:21.012504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid509106 ] 00:20:38.727 [2024-12-09 05:15:21.098139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.727 [2024-12-09 05:15:21.134254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.985 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.985 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:38.985 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.faGvp4uipX 00:20:38.985 [2024-12-09 05:15:21.406814] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.faGvp4uipX': 0100666 00:20:38.985 [2024-12-09 05:15:21.406847] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:38.985 request: 00:20:38.985 { 00:20:38.985 "name": "key0", 00:20:38.985 "path": "/tmp/tmp.faGvp4uipX", 00:20:38.985 "method": "keyring_file_add_key", 00:20:38.985 "req_id": 1 00:20:38.985 } 00:20:38.985 Got JSON-RPC error response 00:20:38.985 response: 00:20:38.985 { 00:20:38.985 "code": -1, 00:20:38.985 "message": "Operation not permitted" 00:20:38.985 } 00:20:38.985 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:39.243 [2024-12-09 05:15:21.599396] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:39.243 [2024-12-09 05:15:21.599426] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:39.243 request: 00:20:39.243 { 00:20:39.243 "name": "TLSTEST", 00:20:39.243 "trtype": "tcp", 00:20:39.243 "traddr": "10.0.0.2", 00:20:39.243 "adrfam": "ipv4", 00:20:39.243 "trsvcid": "4420", 00:20:39.243 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.243 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:39.243 "prchk_reftag": false, 00:20:39.243 "prchk_guard": false, 00:20:39.243 "hdgst": false, 00:20:39.243 "ddgst": false, 00:20:39.243 "psk": "key0", 00:20:39.243 "allow_unrecognized_csi": false, 00:20:39.243 "method": "bdev_nvme_attach_controller", 00:20:39.243 "req_id": 1 00:20:39.243 } 00:20:39.243 Got JSON-RPC error response 00:20:39.243 response: 00:20:39.243 { 00:20:39.243 "code": -126, 00:20:39.243 "message": "Required key not available" 00:20:39.243 } 00:20:39.243 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 509106 00:20:39.243 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 509106 ']' 00:20:39.243 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 509106 00:20:39.243 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:39.243 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.243 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 509106 00:20:39.243 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:39.243 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:39.243 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 509106' 00:20:39.243 killing process with pid 509106 00:20:39.243 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 509106 00:20:39.243 Received shutdown signal, test time was about 10.000000 seconds 00:20:39.243 00:20:39.243 Latency(us) 00:20:39.243 [2024-12-09T04:15:21.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.243 [2024-12-09T04:15:21.713Z] =================================================================================================================== 00:20:39.243 [2024-12-09T04:15:21.713Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:39.243 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 509106 00:20:39.502 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:39.502 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:39.502 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:39.502 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:39.502 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:39.502 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 506883 00:20:39.502 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 506883 ']' 00:20:39.502 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 506883 00:20:39.502 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:39.502 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.502 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 506883 00:20:39.502 05:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:39.502 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:39.502 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 506883' 00:20:39.502 killing process with pid 506883 00:20:39.502 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 506883 00:20:39.502 05:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 506883 00:20:39.761 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:39.761 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:39.761 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:39.761 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.761 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=509317 00:20:39.761 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 509317 00:20:39.761 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:39.761 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 509317 ']' 00:20:39.761 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.761 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.761 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:20:39.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.761 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.761 05:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.761 [2024-12-09 05:15:22.214913] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:20:39.761 [2024-12-09 05:15:22.214963] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.020 [2024-12-09 05:15:22.311743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.020 [2024-12-09 05:15:22.346989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.020 [2024-12-09 05:15:22.347020] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.020 [2024-12-09 05:15:22.347029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.020 [2024-12-09 05:15:22.347037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.020 [2024-12-09 05:15:22.347047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
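The `killprocess` helper seen above probes the target PID with `kill -0` (deliver no signal, only check existence/permission) and inspects the process name before killing it. An equivalent liveness probe in Python (a sketch of the `kill -0` idea, not the autotest helper):

```python
import errno
import os

def pid_alive(pid: int) -> bool:
    """Check whether a PID exists, like `kill -0 $pid` in the shell."""
    try:
        os.kill(pid, 0)  # signal 0: error checking only, nothing delivered
    except OSError as e:
        if e.errno == errno.ESRCH:      # no such process
            return False
        return e.errno == errno.EPERM   # exists, but we may not signal it
    return True

alive_self = pid_alive(os.getpid())  # our own PID is certainly alive
```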
00:20:40.020 [2024-12-09 05:15:22.347652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.588 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.588 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:40.588 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:40.588 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:40.588 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.847 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.847 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.faGvp4uipX 00:20:40.847 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:40.847 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.faGvp4uipX 00:20:40.847 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:20:40.847 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.847 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:20:40.847 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.847 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.faGvp4uipX 00:20:40.847 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.faGvp4uipX 00:20:40.847 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:40.847 [2024-12-09 05:15:23.262761] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.847 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:41.107 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:41.367 [2024-12-09 05:15:23.655763] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:41.367 [2024-12-09 05:15:23.655963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.367 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:41.626 malloc0 00:20:41.626 05:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:41.626 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.faGvp4uipX 00:20:41.885 [2024-12-09 05:15:24.233237] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.faGvp4uipX': 0100666 00:20:41.885 [2024-12-09 05:15:24.233265] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:41.885 request: 00:20:41.885 { 00:20:41.885 "name": "key0", 00:20:41.885 "path": "/tmp/tmp.faGvp4uipX", 00:20:41.885 "method": "keyring_file_add_key", 00:20:41.885 "req_id": 1 
00:20:41.885 } 00:20:41.885 Got JSON-RPC error response 00:20:41.885 response: 00:20:41.885 { 00:20:41.885 "code": -1, 00:20:41.885 "message": "Operation not permitted" 00:20:41.885 } 00:20:41.885 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:42.144 [2024-12-09 05:15:24.433764] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:42.144 [2024-12-09 05:15:24.433796] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:42.144 request: 00:20:42.144 { 00:20:42.144 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.144 "host": "nqn.2016-06.io.spdk:host1", 00:20:42.144 "psk": "key0", 00:20:42.144 "method": "nvmf_subsystem_add_host", 00:20:42.144 "req_id": 1 00:20:42.144 } 00:20:42.144 Got JSON-RPC error response 00:20:42.144 response: 00:20:42.144 { 00:20:42.144 "code": -32603, 00:20:42.144 "message": "Internal error" 00:20:42.144 } 00:20:42.144 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:42.144 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:42.144 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:42.144 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:42.144 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 509317 00:20:42.144 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 509317 ']' 00:20:42.144 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 509317 00:20:42.144 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:42.144 05:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.144 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 509317 00:20:42.144 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:42.144 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:42.144 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 509317' 00:20:42.144 killing process with pid 509317 00:20:42.144 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 509317 00:20:42.144 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 509317 00:20:42.404 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.faGvp4uipX 00:20:42.404 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:42.404 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:42.404 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:42.404 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.404 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=509870 00:20:42.404 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:42.404 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 509870 00:20:42.404 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 509870 ']' 00:20:42.404 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.404 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.404 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.404 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.404 05:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.404 [2024-12-09 05:15:24.787731] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:20:42.404 [2024-12-09 05:15:24.787780] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.663 [2024-12-09 05:15:24.885040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.663 [2024-12-09 05:15:24.921066] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.663 [2024-12-09 05:15:24.921107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.663 [2024-12-09 05:15:24.921116] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.663 [2024-12-09 05:15:24.921123] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.663 [2024-12-09 05:15:24.921145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:42.663 [2024-12-09 05:15:24.921718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.230 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.230 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:43.230 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:43.230 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.230 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.230 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.230 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.faGvp4uipX 00:20:43.230 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.faGvp4uipX 00:20:43.230 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:43.489 [2024-12-09 05:15:25.847503] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.489 05:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:43.761 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:43.761 [2024-12-09 05:15:26.212426] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:43.761 [2024-12-09 05:15:26.212664] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:43.761 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:44.021 malloc0 00:20:44.021 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:44.280 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.faGvp4uipX 00:20:44.540 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:44.540 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=510176 00:20:44.540 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:44.540 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:44.540 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 510176 /var/tmp/bdevperf.sock 00:20:44.540 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 510176 ']' 00:20:44.540 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.540 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.540 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:20:44.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:44.540 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.540 05:15:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.799 [2024-12-09 05:15:27.031877] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:20:44.799 [2024-12-09 05:15:27.031931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid510176 ] 00:20:44.799 [2024-12-09 05:15:27.122901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.799 [2024-12-09 05:15:27.164374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.734 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.734 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:45.734 05:15:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.faGvp4uipX 00:20:45.734 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:45.992 [2024-12-09 05:15:28.214116] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:45.992 TLSTESTn1 00:20:45.992 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:46.251 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:46.251 "subsystems": [ 00:20:46.251 { 00:20:46.251 "subsystem": "keyring", 00:20:46.251 "config": [ 00:20:46.251 { 00:20:46.251 "method": "keyring_file_add_key", 00:20:46.251 "params": { 00:20:46.251 "name": "key0", 00:20:46.251 "path": "/tmp/tmp.faGvp4uipX" 00:20:46.251 } 00:20:46.251 } 00:20:46.251 ] 00:20:46.251 }, 00:20:46.251 { 00:20:46.251 "subsystem": "iobuf", 00:20:46.251 "config": [ 00:20:46.251 { 00:20:46.251 "method": "iobuf_set_options", 00:20:46.251 "params": { 00:20:46.251 "small_pool_count": 8192, 00:20:46.251 "large_pool_count": 1024, 00:20:46.251 "small_bufsize": 8192, 00:20:46.251 "large_bufsize": 135168, 00:20:46.251 "enable_numa": false 00:20:46.251 } 00:20:46.251 } 00:20:46.251 ] 00:20:46.251 }, 00:20:46.251 { 00:20:46.251 "subsystem": "sock", 00:20:46.251 "config": [ 00:20:46.251 { 00:20:46.251 "method": "sock_set_default_impl", 00:20:46.251 "params": { 00:20:46.251 "impl_name": "posix" 00:20:46.251 } 00:20:46.251 }, 00:20:46.251 { 00:20:46.251 "method": "sock_impl_set_options", 00:20:46.251 "params": { 00:20:46.251 "impl_name": "ssl", 00:20:46.251 "recv_buf_size": 4096, 00:20:46.251 "send_buf_size": 4096, 00:20:46.251 "enable_recv_pipe": true, 00:20:46.251 "enable_quickack": false, 00:20:46.251 "enable_placement_id": 0, 00:20:46.251 "enable_zerocopy_send_server": true, 00:20:46.251 "enable_zerocopy_send_client": false, 00:20:46.251 "zerocopy_threshold": 0, 00:20:46.251 "tls_version": 0, 00:20:46.251 "enable_ktls": false 00:20:46.251 } 00:20:46.251 }, 00:20:46.251 { 00:20:46.251 "method": "sock_impl_set_options", 00:20:46.251 "params": { 00:20:46.251 "impl_name": "posix", 00:20:46.251 "recv_buf_size": 2097152, 00:20:46.251 "send_buf_size": 2097152, 00:20:46.251 "enable_recv_pipe": true, 00:20:46.251 "enable_quickack": false, 00:20:46.251 "enable_placement_id": 0, 
00:20:46.251 "enable_zerocopy_send_server": true, 00:20:46.251 "enable_zerocopy_send_client": false, 00:20:46.251 "zerocopy_threshold": 0, 00:20:46.251 "tls_version": 0, 00:20:46.251 "enable_ktls": false 00:20:46.251 } 00:20:46.251 } 00:20:46.251 ] 00:20:46.251 }, 00:20:46.251 { 00:20:46.251 "subsystem": "vmd", 00:20:46.251 "config": [] 00:20:46.251 }, 00:20:46.251 { 00:20:46.251 "subsystem": "accel", 00:20:46.251 "config": [ 00:20:46.251 { 00:20:46.251 "method": "accel_set_options", 00:20:46.251 "params": { 00:20:46.251 "small_cache_size": 128, 00:20:46.251 "large_cache_size": 16, 00:20:46.251 "task_count": 2048, 00:20:46.251 "sequence_count": 2048, 00:20:46.251 "buf_count": 2048 00:20:46.251 } 00:20:46.251 } 00:20:46.251 ] 00:20:46.251 }, 00:20:46.251 { 00:20:46.251 "subsystem": "bdev", 00:20:46.251 "config": [ 00:20:46.251 { 00:20:46.251 "method": "bdev_set_options", 00:20:46.251 "params": { 00:20:46.251 "bdev_io_pool_size": 65535, 00:20:46.251 "bdev_io_cache_size": 256, 00:20:46.251 "bdev_auto_examine": true, 00:20:46.251 "iobuf_small_cache_size": 128, 00:20:46.251 "iobuf_large_cache_size": 16 00:20:46.251 } 00:20:46.251 }, 00:20:46.251 { 00:20:46.251 "method": "bdev_raid_set_options", 00:20:46.251 "params": { 00:20:46.251 "process_window_size_kb": 1024, 00:20:46.251 "process_max_bandwidth_mb_sec": 0 00:20:46.251 } 00:20:46.251 }, 00:20:46.251 { 00:20:46.251 "method": "bdev_iscsi_set_options", 00:20:46.251 "params": { 00:20:46.251 "timeout_sec": 30 00:20:46.251 } 00:20:46.251 }, 00:20:46.251 { 00:20:46.251 "method": "bdev_nvme_set_options", 00:20:46.251 "params": { 00:20:46.251 "action_on_timeout": "none", 00:20:46.251 "timeout_us": 0, 00:20:46.251 "timeout_admin_us": 0, 00:20:46.251 "keep_alive_timeout_ms": 10000, 00:20:46.251 "arbitration_burst": 0, 00:20:46.251 "low_priority_weight": 0, 00:20:46.251 "medium_priority_weight": 0, 00:20:46.251 "high_priority_weight": 0, 00:20:46.251 "nvme_adminq_poll_period_us": 10000, 00:20:46.251 "nvme_ioq_poll_period_us": 0, 
00:20:46.251 "io_queue_requests": 0, 00:20:46.251 "delay_cmd_submit": true, 00:20:46.251 "transport_retry_count": 4, 00:20:46.251 "bdev_retry_count": 3, 00:20:46.251 "transport_ack_timeout": 0, 00:20:46.251 "ctrlr_loss_timeout_sec": 0, 00:20:46.251 "reconnect_delay_sec": 0, 00:20:46.251 "fast_io_fail_timeout_sec": 0, 00:20:46.251 "disable_auto_failback": false, 00:20:46.251 "generate_uuids": false, 00:20:46.251 "transport_tos": 0, 00:20:46.251 "nvme_error_stat": false, 00:20:46.251 "rdma_srq_size": 0, 00:20:46.251 "io_path_stat": false, 00:20:46.251 "allow_accel_sequence": false, 00:20:46.251 "rdma_max_cq_size": 0, 00:20:46.251 "rdma_cm_event_timeout_ms": 0, 00:20:46.251 "dhchap_digests": [ 00:20:46.251 "sha256", 00:20:46.251 "sha384", 00:20:46.251 "sha512" 00:20:46.251 ], 00:20:46.251 "dhchap_dhgroups": [ 00:20:46.251 "null", 00:20:46.251 "ffdhe2048", 00:20:46.251 "ffdhe3072", 00:20:46.251 "ffdhe4096", 00:20:46.251 "ffdhe6144", 00:20:46.251 "ffdhe8192" 00:20:46.251 ] 00:20:46.251 } 00:20:46.251 }, 00:20:46.251 { 00:20:46.251 "method": "bdev_nvme_set_hotplug", 00:20:46.251 "params": { 00:20:46.251 "period_us": 100000, 00:20:46.251 "enable": false 00:20:46.251 } 00:20:46.251 }, 00:20:46.251 { 00:20:46.251 "method": "bdev_malloc_create", 00:20:46.251 "params": { 00:20:46.251 "name": "malloc0", 00:20:46.251 "num_blocks": 8192, 00:20:46.251 "block_size": 4096, 00:20:46.251 "physical_block_size": 4096, 00:20:46.251 "uuid": "aa8fc34a-2e91-409f-908f-83193dec6234", 00:20:46.251 "optimal_io_boundary": 0, 00:20:46.251 "md_size": 0, 00:20:46.251 "dif_type": 0, 00:20:46.251 "dif_is_head_of_md": false, 00:20:46.251 "dif_pi_format": 0 00:20:46.251 } 00:20:46.251 }, 00:20:46.251 { 00:20:46.251 "method": "bdev_wait_for_examine" 00:20:46.251 } 00:20:46.251 ] 00:20:46.251 }, 00:20:46.251 { 00:20:46.251 "subsystem": "nbd", 00:20:46.251 "config": [] 00:20:46.251 }, 00:20:46.251 { 00:20:46.251 "subsystem": "scheduler", 00:20:46.251 "config": [ 00:20:46.251 { 00:20:46.251 "method": 
"framework_set_scheduler", 00:20:46.251 "params": { 00:20:46.251 "name": "static" 00:20:46.251 } 00:20:46.251 } 00:20:46.251 ] 00:20:46.251 }, 00:20:46.251 { 00:20:46.251 "subsystem": "nvmf", 00:20:46.251 "config": [ 00:20:46.251 { 00:20:46.251 "method": "nvmf_set_config", 00:20:46.251 "params": { 00:20:46.251 "discovery_filter": "match_any", 00:20:46.251 "admin_cmd_passthru": { 00:20:46.251 "identify_ctrlr": false 00:20:46.251 }, 00:20:46.251 "dhchap_digests": [ 00:20:46.251 "sha256", 00:20:46.251 "sha384", 00:20:46.251 "sha512" 00:20:46.251 ], 00:20:46.251 "dhchap_dhgroups": [ 00:20:46.251 "null", 00:20:46.251 "ffdhe2048", 00:20:46.251 "ffdhe3072", 00:20:46.251 "ffdhe4096", 00:20:46.251 "ffdhe6144", 00:20:46.251 "ffdhe8192" 00:20:46.251 ] 00:20:46.251 } 00:20:46.251 }, 00:20:46.251 { 00:20:46.251 "method": "nvmf_set_max_subsystems", 00:20:46.251 "params": { 00:20:46.251 "max_subsystems": 1024 00:20:46.251 } 00:20:46.251 }, 00:20:46.251 { 00:20:46.251 "method": "nvmf_set_crdt", 00:20:46.251 "params": { 00:20:46.251 "crdt1": 0, 00:20:46.251 "crdt2": 0, 00:20:46.251 "crdt3": 0 00:20:46.251 } 00:20:46.251 }, 00:20:46.251 { 00:20:46.251 "method": "nvmf_create_transport", 00:20:46.251 "params": { 00:20:46.251 "trtype": "TCP", 00:20:46.251 "max_queue_depth": 128, 00:20:46.251 "max_io_qpairs_per_ctrlr": 127, 00:20:46.251 "in_capsule_data_size": 4096, 00:20:46.251 "max_io_size": 131072, 00:20:46.251 "io_unit_size": 131072, 00:20:46.251 "max_aq_depth": 128, 00:20:46.251 "num_shared_buffers": 511, 00:20:46.251 "buf_cache_size": 4294967295, 00:20:46.251 "dif_insert_or_strip": false, 00:20:46.251 "zcopy": false, 00:20:46.251 "c2h_success": false, 00:20:46.251 "sock_priority": 0, 00:20:46.251 "abort_timeout_sec": 1, 00:20:46.251 "ack_timeout": 0, 00:20:46.251 "data_wr_pool_size": 0 00:20:46.251 } 00:20:46.251 }, 00:20:46.251 { 00:20:46.251 "method": "nvmf_create_subsystem", 00:20:46.251 "params": { 00:20:46.251 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.251 
"allow_any_host": false, 00:20:46.251 "serial_number": "SPDK00000000000001", 00:20:46.251 "model_number": "SPDK bdev Controller", 00:20:46.251 "max_namespaces": 10, 00:20:46.251 "min_cntlid": 1, 00:20:46.251 "max_cntlid": 65519, 00:20:46.251 "ana_reporting": false 00:20:46.251 } 00:20:46.251 }, 00:20:46.251 { 00:20:46.251 "method": "nvmf_subsystem_add_host", 00:20:46.251 "params": { 00:20:46.251 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.251 "host": "nqn.2016-06.io.spdk:host1", 00:20:46.252 "psk": "key0" 00:20:46.252 } 00:20:46.252 }, 00:20:46.252 { 00:20:46.252 "method": "nvmf_subsystem_add_ns", 00:20:46.252 "params": { 00:20:46.252 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.252 "namespace": { 00:20:46.252 "nsid": 1, 00:20:46.252 "bdev_name": "malloc0", 00:20:46.252 "nguid": "AA8FC34A2E91409F908F83193DEC6234", 00:20:46.252 "uuid": "aa8fc34a-2e91-409f-908f-83193dec6234", 00:20:46.252 "no_auto_visible": false 00:20:46.252 } 00:20:46.252 } 00:20:46.252 }, 00:20:46.252 { 00:20:46.252 "method": "nvmf_subsystem_add_listener", 00:20:46.252 "params": { 00:20:46.252 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.252 "listen_address": { 00:20:46.252 "trtype": "TCP", 00:20:46.252 "adrfam": "IPv4", 00:20:46.252 "traddr": "10.0.0.2", 00:20:46.252 "trsvcid": "4420" 00:20:46.252 }, 00:20:46.252 "secure_channel": true 00:20:46.252 } 00:20:46.252 } 00:20:46.252 ] 00:20:46.252 } 00:20:46.252 ] 00:20:46.252 }' 00:20:46.252 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:46.511 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:46.511 "subsystems": [ 00:20:46.511 { 00:20:46.511 "subsystem": "keyring", 00:20:46.511 "config": [ 00:20:46.511 { 00:20:46.511 "method": "keyring_file_add_key", 00:20:46.511 "params": { 00:20:46.511 "name": "key0", 00:20:46.511 "path": "/tmp/tmp.faGvp4uipX" 00:20:46.511 } 
00:20:46.511 } 00:20:46.511 ] 00:20:46.511 }, 00:20:46.511 { 00:20:46.511 "subsystem": "iobuf", 00:20:46.511 "config": [ 00:20:46.511 { 00:20:46.511 "method": "iobuf_set_options", 00:20:46.511 "params": { 00:20:46.511 "small_pool_count": 8192, 00:20:46.511 "large_pool_count": 1024, 00:20:46.511 "small_bufsize": 8192, 00:20:46.511 "large_bufsize": 135168, 00:20:46.511 "enable_numa": false 00:20:46.511 } 00:20:46.511 } 00:20:46.511 ] 00:20:46.511 }, 00:20:46.511 { 00:20:46.511 "subsystem": "sock", 00:20:46.511 "config": [ 00:20:46.511 { 00:20:46.511 "method": "sock_set_default_impl", 00:20:46.511 "params": { 00:20:46.511 "impl_name": "posix" 00:20:46.511 } 00:20:46.511 }, 00:20:46.511 { 00:20:46.511 "method": "sock_impl_set_options", 00:20:46.511 "params": { 00:20:46.511 "impl_name": "ssl", 00:20:46.511 "recv_buf_size": 4096, 00:20:46.511 "send_buf_size": 4096, 00:20:46.511 "enable_recv_pipe": true, 00:20:46.511 "enable_quickack": false, 00:20:46.511 "enable_placement_id": 0, 00:20:46.511 "enable_zerocopy_send_server": true, 00:20:46.511 "enable_zerocopy_send_client": false, 00:20:46.511 "zerocopy_threshold": 0, 00:20:46.511 "tls_version": 0, 00:20:46.511 "enable_ktls": false 00:20:46.511 } 00:20:46.511 }, 00:20:46.511 { 00:20:46.511 "method": "sock_impl_set_options", 00:20:46.511 "params": { 00:20:46.511 "impl_name": "posix", 00:20:46.511 "recv_buf_size": 2097152, 00:20:46.511 "send_buf_size": 2097152, 00:20:46.511 "enable_recv_pipe": true, 00:20:46.511 "enable_quickack": false, 00:20:46.511 "enable_placement_id": 0, 00:20:46.511 "enable_zerocopy_send_server": true, 00:20:46.511 "enable_zerocopy_send_client": false, 00:20:46.511 "zerocopy_threshold": 0, 00:20:46.511 "tls_version": 0, 00:20:46.511 "enable_ktls": false 00:20:46.511 } 00:20:46.511 } 00:20:46.511 ] 00:20:46.511 }, 00:20:46.511 { 00:20:46.511 "subsystem": "vmd", 00:20:46.511 "config": [] 00:20:46.511 }, 00:20:46.511 { 00:20:46.511 "subsystem": "accel", 00:20:46.511 "config": [ 00:20:46.511 { 00:20:46.511 
"method": "accel_set_options", 00:20:46.511 "params": { 00:20:46.511 "small_cache_size": 128, 00:20:46.511 "large_cache_size": 16, 00:20:46.511 "task_count": 2048, 00:20:46.511 "sequence_count": 2048, 00:20:46.511 "buf_count": 2048 00:20:46.511 } 00:20:46.511 } 00:20:46.511 ] 00:20:46.511 }, 00:20:46.511 { 00:20:46.511 "subsystem": "bdev", 00:20:46.511 "config": [ 00:20:46.511 { 00:20:46.511 "method": "bdev_set_options", 00:20:46.511 "params": { 00:20:46.511 "bdev_io_pool_size": 65535, 00:20:46.511 "bdev_io_cache_size": 256, 00:20:46.511 "bdev_auto_examine": true, 00:20:46.511 "iobuf_small_cache_size": 128, 00:20:46.511 "iobuf_large_cache_size": 16 00:20:46.511 } 00:20:46.511 }, 00:20:46.511 { 00:20:46.511 "method": "bdev_raid_set_options", 00:20:46.511 "params": { 00:20:46.511 "process_window_size_kb": 1024, 00:20:46.511 "process_max_bandwidth_mb_sec": 0 00:20:46.511 } 00:20:46.511 }, 00:20:46.511 { 00:20:46.511 "method": "bdev_iscsi_set_options", 00:20:46.511 "params": { 00:20:46.511 "timeout_sec": 30 00:20:46.511 } 00:20:46.511 }, 00:20:46.511 { 00:20:46.511 "method": "bdev_nvme_set_options", 00:20:46.511 "params": { 00:20:46.511 "action_on_timeout": "none", 00:20:46.511 "timeout_us": 0, 00:20:46.511 "timeout_admin_us": 0, 00:20:46.511 "keep_alive_timeout_ms": 10000, 00:20:46.511 "arbitration_burst": 0, 00:20:46.511 "low_priority_weight": 0, 00:20:46.511 "medium_priority_weight": 0, 00:20:46.511 "high_priority_weight": 0, 00:20:46.511 "nvme_adminq_poll_period_us": 10000, 00:20:46.512 "nvme_ioq_poll_period_us": 0, 00:20:46.512 "io_queue_requests": 512, 00:20:46.512 "delay_cmd_submit": true, 00:20:46.512 "transport_retry_count": 4, 00:20:46.512 "bdev_retry_count": 3, 00:20:46.512 "transport_ack_timeout": 0, 00:20:46.512 "ctrlr_loss_timeout_sec": 0, 00:20:46.512 "reconnect_delay_sec": 0, 00:20:46.512 "fast_io_fail_timeout_sec": 0, 00:20:46.512 "disable_auto_failback": false, 00:20:46.512 "generate_uuids": false, 00:20:46.512 "transport_tos": 0, 00:20:46.512 
"nvme_error_stat": false, 00:20:46.512 "rdma_srq_size": 0, 00:20:46.512 "io_path_stat": false, 00:20:46.512 "allow_accel_sequence": false, 00:20:46.512 "rdma_max_cq_size": 0, 00:20:46.512 "rdma_cm_event_timeout_ms": 0, 00:20:46.512 "dhchap_digests": [ 00:20:46.512 "sha256", 00:20:46.512 "sha384", 00:20:46.512 "sha512" 00:20:46.512 ], 00:20:46.512 "dhchap_dhgroups": [ 00:20:46.512 "null", 00:20:46.512 "ffdhe2048", 00:20:46.512 "ffdhe3072", 00:20:46.512 "ffdhe4096", 00:20:46.512 "ffdhe6144", 00:20:46.512 "ffdhe8192" 00:20:46.512 ] 00:20:46.512 } 00:20:46.512 }, 00:20:46.512 { 00:20:46.512 "method": "bdev_nvme_attach_controller", 00:20:46.512 "params": { 00:20:46.512 "name": "TLSTEST", 00:20:46.512 "trtype": "TCP", 00:20:46.512 "adrfam": "IPv4", 00:20:46.512 "traddr": "10.0.0.2", 00:20:46.512 "trsvcid": "4420", 00:20:46.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.512 "prchk_reftag": false, 00:20:46.512 "prchk_guard": false, 00:20:46.512 "ctrlr_loss_timeout_sec": 0, 00:20:46.512 "reconnect_delay_sec": 0, 00:20:46.512 "fast_io_fail_timeout_sec": 0, 00:20:46.512 "psk": "key0", 00:20:46.512 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:46.512 "hdgst": false, 00:20:46.512 "ddgst": false, 00:20:46.512 "multipath": "multipath" 00:20:46.512 } 00:20:46.512 }, 00:20:46.512 { 00:20:46.512 "method": "bdev_nvme_set_hotplug", 00:20:46.512 "params": { 00:20:46.512 "period_us": 100000, 00:20:46.512 "enable": false 00:20:46.512 } 00:20:46.512 }, 00:20:46.512 { 00:20:46.512 "method": "bdev_wait_for_examine" 00:20:46.512 } 00:20:46.512 ] 00:20:46.512 }, 00:20:46.512 { 00:20:46.512 "subsystem": "nbd", 00:20:46.512 "config": [] 00:20:46.512 } 00:20:46.512 ] 00:20:46.512 }' 00:20:46.512 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 510176 00:20:46.512 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 510176 ']' 00:20:46.512 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 510176 00:20:46.512 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:46.512 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.512 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 510176 00:20:46.512 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:46.512 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:46.512 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 510176' 00:20:46.512 killing process with pid 510176 00:20:46.512 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 510176 00:20:46.512 Received shutdown signal, test time was about 10.000000 seconds 00:20:46.512 00:20:46.512 Latency(us) 00:20:46.512 [2024-12-09T04:15:28.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.512 [2024-12-09T04:15:28.982Z] =================================================================================================================== 00:20:46.512 [2024-12-09T04:15:28.982Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:46.512 05:15:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 510176 00:20:46.771 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 509870 00:20:46.771 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 509870 ']' 00:20:46.771 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 509870 00:20:46.771 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:46.771 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.771 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 509870 00:20:46.771 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:46.771 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:46.771 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 509870' 00:20:46.771 killing process with pid 509870 00:20:46.771 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 509870 00:20:46.771 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 509870 00:20:47.031 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:47.031 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:47.031 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.031 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:47.031 "subsystems": [ 00:20:47.031 { 00:20:47.031 "subsystem": "keyring", 00:20:47.031 "config": [ 00:20:47.031 { 00:20:47.031 "method": "keyring_file_add_key", 00:20:47.031 "params": { 00:20:47.031 "name": "key0", 00:20:47.031 "path": "/tmp/tmp.faGvp4uipX" 00:20:47.031 } 00:20:47.031 } 00:20:47.031 ] 00:20:47.031 }, 00:20:47.031 { 00:20:47.031 "subsystem": "iobuf", 00:20:47.031 "config": [ 00:20:47.031 { 00:20:47.031 "method": "iobuf_set_options", 00:20:47.031 "params": { 00:20:47.031 "small_pool_count": 8192, 00:20:47.031 "large_pool_count": 1024, 00:20:47.031 "small_bufsize": 8192, 00:20:47.031 "large_bufsize": 135168, 00:20:47.031 "enable_numa": false 00:20:47.031 } 00:20:47.031 } 00:20:47.031 ] 00:20:47.031 }, 00:20:47.031 
{ 00:20:47.031 "subsystem": "sock", 00:20:47.031 "config": [ 00:20:47.031 { 00:20:47.031 "method": "sock_set_default_impl", 00:20:47.031 "params": { 00:20:47.031 "impl_name": "posix" 00:20:47.031 } 00:20:47.031 }, 00:20:47.031 { 00:20:47.031 "method": "sock_impl_set_options", 00:20:47.031 "params": { 00:20:47.031 "impl_name": "ssl", 00:20:47.031 "recv_buf_size": 4096, 00:20:47.031 "send_buf_size": 4096, 00:20:47.031 "enable_recv_pipe": true, 00:20:47.031 "enable_quickack": false, 00:20:47.031 "enable_placement_id": 0, 00:20:47.031 "enable_zerocopy_send_server": true, 00:20:47.031 "enable_zerocopy_send_client": false, 00:20:47.031 "zerocopy_threshold": 0, 00:20:47.031 "tls_version": 0, 00:20:47.031 "enable_ktls": false 00:20:47.031 } 00:20:47.031 }, 00:20:47.031 { 00:20:47.031 "method": "sock_impl_set_options", 00:20:47.031 "params": { 00:20:47.031 "impl_name": "posix", 00:20:47.031 "recv_buf_size": 2097152, 00:20:47.031 "send_buf_size": 2097152, 00:20:47.031 "enable_recv_pipe": true, 00:20:47.031 "enable_quickack": false, 00:20:47.031 "enable_placement_id": 0, 00:20:47.031 "enable_zerocopy_send_server": true, 00:20:47.031 "enable_zerocopy_send_client": false, 00:20:47.031 "zerocopy_threshold": 0, 00:20:47.031 "tls_version": 0, 00:20:47.031 "enable_ktls": false 00:20:47.031 } 00:20:47.031 } 00:20:47.031 ] 00:20:47.031 }, 00:20:47.031 { 00:20:47.031 "subsystem": "vmd", 00:20:47.031 "config": [] 00:20:47.031 }, 00:20:47.031 { 00:20:47.031 "subsystem": "accel", 00:20:47.031 "config": [ 00:20:47.031 { 00:20:47.031 "method": "accel_set_options", 00:20:47.031 "params": { 00:20:47.031 "small_cache_size": 128, 00:20:47.031 "large_cache_size": 16, 00:20:47.031 "task_count": 2048, 00:20:47.031 "sequence_count": 2048, 00:20:47.031 "buf_count": 2048 00:20:47.031 } 00:20:47.031 } 00:20:47.031 ] 00:20:47.031 }, 00:20:47.031 { 00:20:47.031 "subsystem": "bdev", 00:20:47.031 "config": [ 00:20:47.031 { 00:20:47.031 "method": "bdev_set_options", 00:20:47.031 "params": { 00:20:47.031 
"bdev_io_pool_size": 65535, 00:20:47.031 "bdev_io_cache_size": 256, 00:20:47.031 "bdev_auto_examine": true, 00:20:47.031 "iobuf_small_cache_size": 128, 00:20:47.031 "iobuf_large_cache_size": 16 00:20:47.031 } 00:20:47.031 }, 00:20:47.031 { 00:20:47.031 "method": "bdev_raid_set_options", 00:20:47.031 "params": { 00:20:47.031 "process_window_size_kb": 1024, 00:20:47.031 "process_max_bandwidth_mb_sec": 0 00:20:47.031 } 00:20:47.031 }, 00:20:47.031 { 00:20:47.031 "method": "bdev_iscsi_set_options", 00:20:47.031 "params": { 00:20:47.031 "timeout_sec": 30 00:20:47.031 } 00:20:47.031 }, 00:20:47.031 { 00:20:47.031 "method": "bdev_nvme_set_options", 00:20:47.031 "params": { 00:20:47.031 "action_on_timeout": "none", 00:20:47.031 "timeout_us": 0, 00:20:47.031 "timeout_admin_us": 0, 00:20:47.031 "keep_alive_timeout_ms": 10000, 00:20:47.031 "arbitration_burst": 0, 00:20:47.031 "low_priority_weight": 0, 00:20:47.031 "medium_priority_weight": 0, 00:20:47.031 "high_priority_weight": 0, 00:20:47.031 "nvme_adminq_poll_period_us": 10000, 00:20:47.031 "nvme_ioq_poll_period_us": 0, 00:20:47.031 "io_queue_requests": 0, 00:20:47.031 "delay_cmd_submit": true, 00:20:47.031 "transport_retry_count": 4, 00:20:47.031 "bdev_retry_count": 3, 00:20:47.031 "transport_ack_timeout": 0, 00:20:47.031 "ctrlr_loss_timeout_sec": 0, 00:20:47.031 "reconnect_delay_sec": 0, 00:20:47.031 "fast_io_fail_timeout_sec": 0, 00:20:47.031 "disable_auto_failback": false, 00:20:47.031 "generate_uuids": false, 00:20:47.031 "transport_tos": 0, 00:20:47.031 "nvme_error_stat": false, 00:20:47.031 "rdma_srq_size": 0, 00:20:47.031 "io_path_stat": false, 00:20:47.031 "allow_accel_sequence": false, 00:20:47.031 "rdma_max_cq_size": 0, 00:20:47.031 "rdma_cm_event_timeout_ms": 0, 00:20:47.031 "dhchap_digests": [ 00:20:47.031 "sha256", 00:20:47.031 "sha384", 00:20:47.031 "sha512" 00:20:47.031 ], 00:20:47.032 "dhchap_dhgroups": [ 00:20:47.032 "null", 00:20:47.032 "ffdhe2048", 00:20:47.032 "ffdhe3072", 00:20:47.032 "ffdhe4096", 
00:20:47.032 "ffdhe6144", 00:20:47.032 "ffdhe8192" 00:20:47.032 ] 00:20:47.032 } 00:20:47.032 }, 00:20:47.032 { 00:20:47.032 "method": "bdev_nvme_set_hotplug", 00:20:47.032 "params": { 00:20:47.032 "period_us": 100000, 00:20:47.032 "enable": false 00:20:47.032 } 00:20:47.032 }, 00:20:47.032 { 00:20:47.032 "method": "bdev_malloc_create", 00:20:47.032 "params": { 00:20:47.032 "name": "malloc0", 00:20:47.032 "num_blocks": 8192, 00:20:47.032 "block_size": 4096, 00:20:47.032 "physical_block_size": 4096, 00:20:47.032 "uuid": "aa8fc34a-2e91-409f-908f-83193dec6234", 00:20:47.032 "optimal_io_boundary": 0, 00:20:47.032 "md_size": 0, 00:20:47.032 "dif_type": 0, 00:20:47.032 "dif_is_head_of_md": false, 00:20:47.032 "dif_pi_format": 0 00:20:47.032 } 00:20:47.032 }, 00:20:47.032 { 00:20:47.032 "method": "bdev_wait_for_examine" 00:20:47.032 } 00:20:47.032 ] 00:20:47.032 }, 00:20:47.032 { 00:20:47.032 "subsystem": "nbd", 00:20:47.032 "config": [] 00:20:47.032 }, 00:20:47.032 { 00:20:47.032 "subsystem": "scheduler", 00:20:47.032 "config": [ 00:20:47.032 { 00:20:47.032 "method": "framework_set_scheduler", 00:20:47.032 "params": { 00:20:47.032 "name": "static" 00:20:47.032 } 00:20:47.032 } 00:20:47.032 ] 00:20:47.032 }, 00:20:47.032 { 00:20:47.032 "subsystem": "nvmf", 00:20:47.032 "config": [ 00:20:47.032 { 00:20:47.032 "method": "nvmf_set_config", 00:20:47.032 "params": { 00:20:47.032 "discovery_filter": "match_any", 00:20:47.032 "admin_cmd_passthru": { 00:20:47.032 "identify_ctrlr": false 00:20:47.032 }, 00:20:47.032 "dhchap_digests": [ 00:20:47.032 "sha256", 00:20:47.032 "sha384", 00:20:47.032 "sha512" 00:20:47.032 ], 00:20:47.032 "dhchap_dhgroups": [ 00:20:47.032 "null", 00:20:47.032 "ffdhe2048", 00:20:47.032 "ffdhe3072", 00:20:47.032 "ffdhe4096", 00:20:47.032 "ffdhe6144", 00:20:47.032 "ffdhe8192" 00:20:47.032 ] 00:20:47.032 } 00:20:47.032 }, 00:20:47.032 { 00:20:47.032 "method": "nvmf_set_max_subsystems", 00:20:47.032 "params": { 00:20:47.032 "max_subsystems": 1024 00:20:47.032 
} 00:20:47.032 }, 00:20:47.032 { 00:20:47.032 "method": "nvmf_set_crdt", 00:20:47.032 "params": { 00:20:47.032 "crdt1": 0, 00:20:47.032 "crdt2": 0, 00:20:47.032 "crdt3": 0 00:20:47.032 } 00:20:47.032 }, 00:20:47.032 { 00:20:47.032 "method": "nvmf_create_transport", 00:20:47.032 "params": { 00:20:47.032 "trtype": "TCP", 00:20:47.032 "max_queue_depth": 128, 00:20:47.032 "max_io_qpairs_per_ctrlr": 127, 00:20:47.032 "in_capsule_data_size": 4096, 00:20:47.032 "max_io_size": 131072, 00:20:47.032 "io_unit_size": 131072, 00:20:47.032 "max_aq_depth": 128, 00:20:47.032 "num_shared_buffers": 511, 00:20:47.032 "buf_cache_size": 4294967295, 00:20:47.032 "dif_insert_or_strip": false, 00:20:47.032 "zcopy": false, 00:20:47.032 "c2h_success": false, 00:20:47.032 "sock_priority": 0, 00:20:47.032 "abort_timeout_sec": 1, 00:20:47.032 "ack_timeout": 0, 00:20:47.032 "data_wr_pool_size": 0 00:20:47.032 } 00:20:47.032 }, 00:20:47.032 { 00:20:47.032 "method": "nvmf_create_subsystem", 00:20:47.032 "params": { 00:20:47.032 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.032 "allow_any_host": false, 00:20:47.032 "serial_number": "SPDK00000000000001", 00:20:47.032 "model_number": "SPDK bdev Controller", 00:20:47.032 "max_namespaces": 10, 00:20:47.032 "min_cntlid": 1, 00:20:47.032 "max_cntlid": 65519, 00:20:47.032 "ana_reporting": false 00:20:47.032 } 00:20:47.032 }, 00:20:47.032 { 00:20:47.032 "method": "nvmf_subsystem_add_host", 00:20:47.032 "params": { 00:20:47.032 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.032 "host": "nqn.2016-06.io.spdk:host1", 00:20:47.032 "psk": "key0" 00:20:47.032 } 00:20:47.032 }, 00:20:47.032 { 00:20:47.032 "method": "nvmf_subsystem_add_ns", 00:20:47.032 "params": { 00:20:47.032 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.032 "namespace": { 00:20:47.032 "nsid": 1, 00:20:47.032 "bdev_name": "malloc0", 00:20:47.032 "nguid": "AA8FC34A2E91409F908F83193DEC6234", 00:20:47.032 "uuid": "aa8fc34a-2e91-409f-908f-83193dec6234", 00:20:47.032 "no_auto_visible": false 
00:20:47.032 } 00:20:47.032 } 00:20:47.032 }, 00:20:47.032 { 00:20:47.032 "method": "nvmf_subsystem_add_listener", 00:20:47.032 "params": { 00:20:47.032 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.032 "listen_address": { 00:20:47.032 "trtype": "TCP", 00:20:47.032 "adrfam": "IPv4", 00:20:47.032 "traddr": "10.0.0.2", 00:20:47.032 "trsvcid": "4420" 00:20:47.032 }, 00:20:47.032 "secure_channel": true 00:20:47.032 } 00:20:47.032 } 00:20:47.032 ] 00:20:47.032 } 00:20:47.032 ] 00:20:47.032 }' 00:20:47.032 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.032 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=510721 00:20:47.032 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:47.032 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 510721 00:20:47.032 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 510721 ']' 00:20:47.032 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.032 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.032 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:47.032 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.032 05:15:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.032 [2024-12-09 05:15:29.435992] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:20:47.032 [2024-12-09 05:15:29.436042] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.295 [2024-12-09 05:15:29.535040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.295 [2024-12-09 05:15:29.575532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.295 [2024-12-09 05:15:29.575567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.295 [2024-12-09 05:15:29.575577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.295 [2024-12-09 05:15:29.575585] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.295 [2024-12-09 05:15:29.575608] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:47.295 [2024-12-09 05:15:29.576230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.554 [2024-12-09 05:15:29.789125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.554 [2024-12-09 05:15:29.821153] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:47.554 [2024-12-09 05:15:29.821363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.812 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.812 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:47.812 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:47.812 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:47.812 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.071 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.071 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=510767 00:20:48.071 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 510767 /var/tmp/bdevperf.sock 00:20:48.071 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 510767 ']' 00:20:48.071 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.071 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.071 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c 
/dev/fd/63 00:20:48.071 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:48.071 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.071 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:48.071 "subsystems": [ 00:20:48.071 { 00:20:48.071 "subsystem": "keyring", 00:20:48.071 "config": [ 00:20:48.071 { 00:20:48.071 "method": "keyring_file_add_key", 00:20:48.071 "params": { 00:20:48.071 "name": "key0", 00:20:48.071 "path": "/tmp/tmp.faGvp4uipX" 00:20:48.071 } 00:20:48.071 } 00:20:48.071 ] 00:20:48.071 }, 00:20:48.071 { 00:20:48.071 "subsystem": "iobuf", 00:20:48.071 "config": [ 00:20:48.071 { 00:20:48.071 "method": "iobuf_set_options", 00:20:48.071 "params": { 00:20:48.071 "small_pool_count": 8192, 00:20:48.071 "large_pool_count": 1024, 00:20:48.071 "small_bufsize": 8192, 00:20:48.071 "large_bufsize": 135168, 00:20:48.071 "enable_numa": false 00:20:48.071 } 00:20:48.071 } 00:20:48.071 ] 00:20:48.071 }, 00:20:48.071 { 00:20:48.071 "subsystem": "sock", 00:20:48.071 "config": [ 00:20:48.071 { 00:20:48.071 "method": "sock_set_default_impl", 00:20:48.071 "params": { 00:20:48.071 "impl_name": "posix" 00:20:48.071 } 00:20:48.071 }, 00:20:48.071 { 00:20:48.071 "method": "sock_impl_set_options", 00:20:48.071 "params": { 00:20:48.071 "impl_name": "ssl", 00:20:48.071 "recv_buf_size": 4096, 00:20:48.071 "send_buf_size": 4096, 00:20:48.071 "enable_recv_pipe": true, 00:20:48.071 "enable_quickack": false, 00:20:48.071 "enable_placement_id": 0, 00:20:48.071 "enable_zerocopy_send_server": true, 00:20:48.071 "enable_zerocopy_send_client": false, 00:20:48.071 "zerocopy_threshold": 0, 00:20:48.071 "tls_version": 0, 00:20:48.071 "enable_ktls": false 00:20:48.071 } 
00:20:48.071 }, 00:20:48.071 { 00:20:48.071 "method": "sock_impl_set_options", 00:20:48.071 "params": { 00:20:48.071 "impl_name": "posix", 00:20:48.071 "recv_buf_size": 2097152, 00:20:48.071 "send_buf_size": 2097152, 00:20:48.071 "enable_recv_pipe": true, 00:20:48.071 "enable_quickack": false, 00:20:48.071 "enable_placement_id": 0, 00:20:48.071 "enable_zerocopy_send_server": true, 00:20:48.071 "enable_zerocopy_send_client": false, 00:20:48.071 "zerocopy_threshold": 0, 00:20:48.071 "tls_version": 0, 00:20:48.071 "enable_ktls": false 00:20:48.071 } 00:20:48.071 } 00:20:48.071 ] 00:20:48.071 }, 00:20:48.071 { 00:20:48.071 "subsystem": "vmd", 00:20:48.071 "config": [] 00:20:48.071 }, 00:20:48.071 { 00:20:48.071 "subsystem": "accel", 00:20:48.071 "config": [ 00:20:48.071 { 00:20:48.071 "method": "accel_set_options", 00:20:48.071 "params": { 00:20:48.071 "small_cache_size": 128, 00:20:48.071 "large_cache_size": 16, 00:20:48.071 "task_count": 2048, 00:20:48.071 "sequence_count": 2048, 00:20:48.071 "buf_count": 2048 00:20:48.071 } 00:20:48.071 } 00:20:48.071 ] 00:20:48.071 }, 00:20:48.071 { 00:20:48.071 "subsystem": "bdev", 00:20:48.071 "config": [ 00:20:48.071 { 00:20:48.071 "method": "bdev_set_options", 00:20:48.071 "params": { 00:20:48.071 "bdev_io_pool_size": 65535, 00:20:48.071 "bdev_io_cache_size": 256, 00:20:48.071 "bdev_auto_examine": true, 00:20:48.071 "iobuf_small_cache_size": 128, 00:20:48.071 "iobuf_large_cache_size": 16 00:20:48.071 } 00:20:48.071 }, 00:20:48.071 { 00:20:48.071 "method": "bdev_raid_set_options", 00:20:48.071 "params": { 00:20:48.071 "process_window_size_kb": 1024, 00:20:48.071 "process_max_bandwidth_mb_sec": 0 00:20:48.071 } 00:20:48.071 }, 00:20:48.071 { 00:20:48.071 "method": "bdev_iscsi_set_options", 00:20:48.071 "params": { 00:20:48.071 "timeout_sec": 30 00:20:48.071 } 00:20:48.071 }, 00:20:48.071 { 00:20:48.071 "method": "bdev_nvme_set_options", 00:20:48.071 "params": { 00:20:48.071 "action_on_timeout": "none", 00:20:48.071 "timeout_us": 
0, 00:20:48.071 "timeout_admin_us": 0, 00:20:48.071 "keep_alive_timeout_ms": 10000, 00:20:48.071 "arbitration_burst": 0, 00:20:48.071 "low_priority_weight": 0, 00:20:48.071 "medium_priority_weight": 0, 00:20:48.071 "high_priority_weight": 0, 00:20:48.071 "nvme_adminq_poll_period_us": 10000, 00:20:48.071 "nvme_ioq_poll_period_us": 0, 00:20:48.071 "io_queue_requests": 512, 00:20:48.071 "delay_cmd_submit": true, 00:20:48.071 "transport_retry_count": 4, 00:20:48.071 "bdev_retry_count": 3, 00:20:48.071 "transport_ack_timeout": 0, 00:20:48.071 "ctrlr_loss_timeout_sec": 0, 00:20:48.071 "reconnect_delay_sec": 0, 00:20:48.071 "fast_io_fail_timeout_sec": 0, 00:20:48.071 "disable_auto_failback": false, 00:20:48.071 "generate_uuids": false, 00:20:48.071 "transport_tos": 0, 00:20:48.071 "nvme_error_stat": false, 00:20:48.071 "rdma_srq_size": 0, 00:20:48.071 "io_path_stat": false, 00:20:48.071 "allow_accel_sequence": false, 00:20:48.071 "rdma_max_cq_size": 0, 00:20:48.071 "rdma_cm_event_timeout_ms": 0, 00:20:48.071 "dhchap_digests": [ 00:20:48.071 "sha256", 00:20:48.071 "sha384", 00:20:48.071 "sha512" 00:20:48.071 ], 00:20:48.071 "dhchap_dhgroups": [ 00:20:48.071 "null", 00:20:48.071 "ffdhe2048", 00:20:48.071 "ffdhe3072", 00:20:48.071 "ffdhe4096", 00:20:48.071 "ffdhe6144", 00:20:48.071 "ffdhe8192" 00:20:48.071 ] 00:20:48.071 } 00:20:48.071 }, 00:20:48.071 { 00:20:48.071 "method": "bdev_nvme_attach_controller", 00:20:48.071 "params": { 00:20:48.071 "name": "TLSTEST", 00:20:48.071 "trtype": "TCP", 00:20:48.071 "adrfam": "IPv4", 00:20:48.071 "traddr": "10.0.0.2", 00:20:48.071 "trsvcid": "4420", 00:20:48.071 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.071 "prchk_reftag": false, 00:20:48.071 "prchk_guard": false, 00:20:48.071 "ctrlr_loss_timeout_sec": 0, 00:20:48.071 "reconnect_delay_sec": 0, 00:20:48.072 "fast_io_fail_timeout_sec": 0, 00:20:48.072 "psk": "key0", 00:20:48.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:48.072 "hdgst": false, 00:20:48.072 "ddgst": false, 
00:20:48.072 "multipath": "multipath" 00:20:48.072 } 00:20:48.072 }, 00:20:48.072 { 00:20:48.072 "method": "bdev_nvme_set_hotplug", 00:20:48.072 "params": { 00:20:48.072 "period_us": 100000, 00:20:48.072 "enable": false 00:20:48.072 } 00:20:48.072 }, 00:20:48.072 { 00:20:48.072 "method": "bdev_wait_for_examine" 00:20:48.072 } 00:20:48.072 ] 00:20:48.072 }, 00:20:48.072 { 00:20:48.072 "subsystem": "nbd", 00:20:48.072 "config": [] 00:20:48.072 } 00:20:48.072 ] 00:20:48.072 }' 00:20:48.072 05:15:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.072 [2024-12-09 05:15:30.363651] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:20:48.072 [2024-12-09 05:15:30.363700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid510767 ] 00:20:48.072 [2024-12-09 05:15:30.458580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.072 [2024-12-09 05:15:30.497939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.330 [2024-12-09 05:15:30.652819] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:48.896 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.897 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:48.897 05:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:48.897 Running I/O for 10 seconds... 
00:20:51.208 4875.00 IOPS, 19.04 MiB/s [2024-12-09T04:15:34.616Z] 4795.00 IOPS, 18.73 MiB/s [2024-12-09T04:15:35.552Z] 4818.00 IOPS, 18.82 MiB/s [2024-12-09T04:15:36.488Z] 4729.75 IOPS, 18.48 MiB/s [2024-12-09T04:15:37.427Z] 4682.00 IOPS, 18.29 MiB/s [2024-12-09T04:15:38.364Z] 4650.50 IOPS, 18.17 MiB/s [2024-12-09T04:15:39.744Z] 4677.57 IOPS, 18.27 MiB/s [2024-12-09T04:15:40.679Z] 4690.00 IOPS, 18.32 MiB/s [2024-12-09T04:15:41.614Z] 4702.89 IOPS, 18.37 MiB/s [2024-12-09T04:15:41.614Z] 4701.90 IOPS, 18.37 MiB/s 00:20:59.144 Latency(us) 00:20:59.144 [2024-12-09T04:15:41.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.144 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:59.144 Verification LBA range: start 0x0 length 0x2000 00:20:59.144 TLSTESTn1 : 10.02 4706.16 18.38 0.00 0.00 27159.94 5138.02 69206.02 00:20:59.144 [2024-12-09T04:15:41.614Z] =================================================================================================================== 00:20:59.144 [2024-12-09T04:15:41.614Z] Total : 4706.16 18.38 0.00 0.00 27159.94 5138.02 69206.02 00:20:59.144 { 00:20:59.144 "results": [ 00:20:59.144 { 00:20:59.144 "job": "TLSTESTn1", 00:20:59.144 "core_mask": "0x4", 00:20:59.144 "workload": "verify", 00:20:59.144 "status": "finished", 00:20:59.144 "verify_range": { 00:20:59.144 "start": 0, 00:20:59.144 "length": 8192 00:20:59.144 }, 00:20:59.144 "queue_depth": 128, 00:20:59.144 "io_size": 4096, 00:20:59.144 "runtime": 10.018141, 00:20:59.144 "iops": 4706.162550517107, 00:20:59.144 "mibps": 18.38344746295745, 00:20:59.144 "io_failed": 0, 00:20:59.144 "io_timeout": 0, 00:20:59.144 "avg_latency_us": 27159.94018042293, 00:20:59.144 "min_latency_us": 5138.0224, 00:20:59.144 "max_latency_us": 69206.016 00:20:59.144 } 00:20:59.144 ], 00:20:59.144 "core_count": 1 00:20:59.144 } 00:20:59.144 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:20:59.144 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 510767 00:20:59.144 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 510767 ']' 00:20:59.144 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 510767 00:20:59.144 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:59.144 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.144 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 510767 00:20:59.144 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:59.144 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:59.144 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 510767' 00:20:59.144 killing process with pid 510767 00:20:59.144 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 510767 00:20:59.144 Received shutdown signal, test time was about 10.000000 seconds 00:20:59.144 00:20:59.144 Latency(us) 00:20:59.144 [2024-12-09T04:15:41.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.144 [2024-12-09T04:15:41.614Z] =================================================================================================================== 00:20:59.144 [2024-12-09T04:15:41.614Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:59.144 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 510767 00:20:59.402 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 510721 00:20:59.402 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 
510721 ']' 00:20:59.402 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 510721 00:20:59.402 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:59.402 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.402 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 510721 00:20:59.402 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:59.402 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:59.402 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 510721' 00:20:59.402 killing process with pid 510721 00:20:59.402 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 510721 00:20:59.402 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 510721 00:20:59.662 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:59.662 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:59.662 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:59.662 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.662 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=512836 00:20:59.662 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:59.662 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 512836 00:20:59.662 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 512836 ']' 00:20:59.662 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.662 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.662 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.662 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.662 05:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.662 [2024-12-09 05:15:41.959661] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:20:59.662 [2024-12-09 05:15:41.959712] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.662 [2024-12-09 05:15:42.055925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.662 [2024-12-09 05:15:42.091203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.662 [2024-12-09 05:15:42.091242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.662 [2024-12-09 05:15:42.091251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.662 [2024-12-09 05:15:42.091259] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.662 [2024-12-09 05:15:42.091265] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:59.662 [2024-12-09 05:15:42.091870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.598 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.598 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:00.598 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:00.598 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:00.598 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.598 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.598 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.faGvp4uipX 00:21:00.598 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.faGvp4uipX 00:21:00.598 05:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:00.598 [2024-12-09 05:15:42.998471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.599 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:00.858 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:01.117 [2024-12-09 05:15:43.391478] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:01.117 [2024-12-09 05:15:43.391711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:01.117 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:01.376 malloc0 00:21:01.376 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:01.376 05:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.faGvp4uipX 00:21:01.635 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:01.895 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:01.895 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=513173 00:21:01.895 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:01.895 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 513173 /var/tmp/bdevperf.sock 00:21:01.895 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 513173 ']' 00:21:01.895 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.895 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:01.895 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:21:01.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.895 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:01.895 05:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.895 [2024-12-09 05:15:44.248123] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:21:01.895 [2024-12-09 05:15:44.248175] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid513173 ] 00:21:01.895 [2024-12-09 05:15:44.340894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.174 [2024-12-09 05:15:44.381584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.739 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:02.739 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:02.739 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.faGvp4uipX 00:21:02.997 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:02.997 [2024-12-09 05:15:45.456433] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:03.255 nvme0n1 00:21:03.255 05:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:03.255 Running I/O for 1 seconds... 00:21:04.449 4420.00 IOPS, 17.27 MiB/s 00:21:04.449 Latency(us) 00:21:04.449 [2024-12-09T04:15:46.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.449 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:04.449 Verification LBA range: start 0x0 length 0x2000 00:21:04.449 nvme0n1 : 1.04 4377.80 17.10 0.00 0.00 28759.04 4875.88 37748.74 00:21:04.449 [2024-12-09T04:15:46.919Z] =================================================================================================================== 00:21:04.449 [2024-12-09T04:15:46.919Z] Total : 4377.80 17.10 0.00 0.00 28759.04 4875.88 37748.74 00:21:04.449 { 00:21:04.449 "results": [ 00:21:04.449 { 00:21:04.449 "job": "nvme0n1", 00:21:04.449 "core_mask": "0x2", 00:21:04.449 "workload": "verify", 00:21:04.449 "status": "finished", 00:21:04.449 "verify_range": { 00:21:04.449 "start": 0, 00:21:04.449 "length": 8192 00:21:04.449 }, 00:21:04.449 "queue_depth": 128, 00:21:04.449 "io_size": 4096, 00:21:04.449 "runtime": 1.038878, 00:21:04.449 "iops": 4377.799895656661, 00:21:04.449 "mibps": 17.10078084240883, 00:21:04.449 "io_failed": 0, 00:21:04.449 "io_timeout": 0, 00:21:04.449 "avg_latency_us": 28759.04414283202, 00:21:04.449 "min_latency_us": 4875.8784, 00:21:04.449 "max_latency_us": 37748.736 00:21:04.449 } 00:21:04.449 ], 00:21:04.449 "core_count": 1 00:21:04.449 } 00:21:04.449 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 513173 00:21:04.449 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 513173 ']' 00:21:04.449 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 513173 00:21:04.449 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:04.449 
05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.449 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 513173 00:21:04.449 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:04.449 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:04.449 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 513173' 00:21:04.449 killing process with pid 513173 00:21:04.449 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 513173 00:21:04.449 Received shutdown signal, test time was about 1.000000 seconds 00:21:04.449 00:21:04.449 Latency(us) 00:21:04.449 [2024-12-09T04:15:46.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.449 [2024-12-09T04:15:46.919Z] =================================================================================================================== 00:21:04.449 [2024-12-09T04:15:46.919Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:04.449 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 513173 00:21:04.707 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 512836 00:21:04.707 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 512836 ']' 00:21:04.707 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 512836 00:21:04.707 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:04.707 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.707 05:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 512836 00:21:04.707 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:04.707 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:04.707 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 512836' 00:21:04.707 killing process with pid 512836 00:21:04.707 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 512836 00:21:04.707 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 512836 00:21:04.966 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:04.966 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:04.966 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:04.966 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.966 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=513731 00:21:04.966 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:04.966 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 513731 00:21:04.966 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 513731 ']' 00:21:04.966 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.966 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.966 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:21:04.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.966 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.966 05:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.966 [2024-12-09 05:15:47.290589] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:21:04.966 [2024-12-09 05:15:47.290635] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.966 [2024-12-09 05:15:47.387325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.966 [2024-12-09 05:15:47.427160] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.966 [2024-12-09 05:15:47.427198] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.966 [2024-12-09 05:15:47.427212] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.966 [2024-12-09 05:15:47.427220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.966 [2024-12-09 05:15:47.427228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:04.966 [2024-12-09 05:15:47.427827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.901 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.901 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:05.901 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:05.901 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:05.901 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.901 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.901 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:05.901 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.901 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.901 [2024-12-09 05:15:48.189443] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.901 malloc0 00:21:05.901 [2024-12-09 05:15:48.217832] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:05.901 [2024-12-09 05:15:48.218062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.901 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.901 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=513984 00:21:05.901 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:05.901 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 513984 /var/tmp/bdevperf.sock 00:21:05.901 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 513984 ']' 00:21:05.901 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.901 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:05.901 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:05.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:05.901 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.901 05:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.901 [2024-12-09 05:15:48.294555] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:21:05.901 [2024-12-09 05:15:48.294603] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid513984 ] 00:21:06.160 [2024-12-09 05:15:48.389747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.160 [2024-12-09 05:15:48.430448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.728 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.728 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:06.728 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.faGvp4uipX 00:21:06.987 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:07.246 [2024-12-09 05:15:49.473863] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:07.246 nvme0n1 00:21:07.246 05:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:07.246 Running I/O for 1 seconds... 
00:21:08.625 4370.00 IOPS, 17.07 MiB/s 00:21:08.625 Latency(us) 00:21:08.625 [2024-12-09T04:15:51.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.625 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:08.625 Verification LBA range: start 0x0 length 0x2000 00:21:08.625 nvme0n1 : 1.02 4419.81 17.26 0.00 0.00 28721.96 7602.18 31667.00 00:21:08.625 [2024-12-09T04:15:51.095Z] =================================================================================================================== 00:21:08.625 [2024-12-09T04:15:51.095Z] Total : 4419.81 17.26 0.00 0.00 28721.96 7602.18 31667.00 00:21:08.625 { 00:21:08.625 "results": [ 00:21:08.625 { 00:21:08.625 "job": "nvme0n1", 00:21:08.625 "core_mask": "0x2", 00:21:08.625 "workload": "verify", 00:21:08.625 "status": "finished", 00:21:08.625 "verify_range": { 00:21:08.625 "start": 0, 00:21:08.625 "length": 8192 00:21:08.625 }, 00:21:08.625 "queue_depth": 128, 00:21:08.625 "io_size": 4096, 00:21:08.625 "runtime": 1.017691, 00:21:08.625 "iops": 4419.809156217359, 00:21:08.625 "mibps": 17.264879516474057, 00:21:08.625 "io_failed": 0, 00:21:08.625 "io_timeout": 0, 00:21:08.625 "avg_latency_us": 28721.96063672744, 00:21:08.625 "min_latency_us": 7602.176, 00:21:08.625 "max_latency_us": 31666.9952 00:21:08.625 } 00:21:08.625 ], 00:21:08.625 "core_count": 1 00:21:08.625 } 00:21:08.625 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:08.625 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.625 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.625 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.625 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:08.625 "subsystems": [ 00:21:08.625 { 00:21:08.625 "subsystem": "keyring", 
00:21:08.625 "config": [ 00:21:08.625 { 00:21:08.625 "method": "keyring_file_add_key", 00:21:08.625 "params": { 00:21:08.625 "name": "key0", 00:21:08.625 "path": "/tmp/tmp.faGvp4uipX" 00:21:08.625 } 00:21:08.625 } 00:21:08.625 ] 00:21:08.625 }, 00:21:08.625 { 00:21:08.625 "subsystem": "iobuf", 00:21:08.626 "config": [ 00:21:08.626 { 00:21:08.626 "method": "iobuf_set_options", 00:21:08.626 "params": { 00:21:08.626 "small_pool_count": 8192, 00:21:08.626 "large_pool_count": 1024, 00:21:08.626 "small_bufsize": 8192, 00:21:08.626 "large_bufsize": 135168, 00:21:08.626 "enable_numa": false 00:21:08.626 } 00:21:08.626 } 00:21:08.626 ] 00:21:08.626 }, 00:21:08.626 { 00:21:08.626 "subsystem": "sock", 00:21:08.626 "config": [ 00:21:08.626 { 00:21:08.626 "method": "sock_set_default_impl", 00:21:08.626 "params": { 00:21:08.626 "impl_name": "posix" 00:21:08.626 } 00:21:08.626 }, 00:21:08.626 { 00:21:08.626 "method": "sock_impl_set_options", 00:21:08.626 "params": { 00:21:08.626 "impl_name": "ssl", 00:21:08.626 "recv_buf_size": 4096, 00:21:08.626 "send_buf_size": 4096, 00:21:08.626 "enable_recv_pipe": true, 00:21:08.626 "enable_quickack": false, 00:21:08.626 "enable_placement_id": 0, 00:21:08.626 "enable_zerocopy_send_server": true, 00:21:08.626 "enable_zerocopy_send_client": false, 00:21:08.626 "zerocopy_threshold": 0, 00:21:08.626 "tls_version": 0, 00:21:08.626 "enable_ktls": false 00:21:08.626 } 00:21:08.626 }, 00:21:08.626 { 00:21:08.626 "method": "sock_impl_set_options", 00:21:08.626 "params": { 00:21:08.626 "impl_name": "posix", 00:21:08.626 "recv_buf_size": 2097152, 00:21:08.626 "send_buf_size": 2097152, 00:21:08.626 "enable_recv_pipe": true, 00:21:08.626 "enable_quickack": false, 00:21:08.626 "enable_placement_id": 0, 00:21:08.626 "enable_zerocopy_send_server": true, 00:21:08.626 "enable_zerocopy_send_client": false, 00:21:08.626 "zerocopy_threshold": 0, 00:21:08.626 "tls_version": 0, 00:21:08.626 "enable_ktls": false 00:21:08.626 } 00:21:08.626 } 00:21:08.626 ] 
00:21:08.626 }, 00:21:08.626 { 00:21:08.626 "subsystem": "vmd", 00:21:08.626 "config": [] 00:21:08.626 }, 00:21:08.626 { 00:21:08.626 "subsystem": "accel", 00:21:08.626 "config": [ 00:21:08.626 { 00:21:08.626 "method": "accel_set_options", 00:21:08.626 "params": { 00:21:08.626 "small_cache_size": 128, 00:21:08.626 "large_cache_size": 16, 00:21:08.626 "task_count": 2048, 00:21:08.626 "sequence_count": 2048, 00:21:08.626 "buf_count": 2048 00:21:08.626 } 00:21:08.626 } 00:21:08.626 ] 00:21:08.626 }, 00:21:08.626 { 00:21:08.626 "subsystem": "bdev", 00:21:08.626 "config": [ 00:21:08.626 { 00:21:08.626 "method": "bdev_set_options", 00:21:08.626 "params": { 00:21:08.626 "bdev_io_pool_size": 65535, 00:21:08.626 "bdev_io_cache_size": 256, 00:21:08.626 "bdev_auto_examine": true, 00:21:08.626 "iobuf_small_cache_size": 128, 00:21:08.626 "iobuf_large_cache_size": 16 00:21:08.626 } 00:21:08.626 }, 00:21:08.626 { 00:21:08.626 "method": "bdev_raid_set_options", 00:21:08.626 "params": { 00:21:08.626 "process_window_size_kb": 1024, 00:21:08.626 "process_max_bandwidth_mb_sec": 0 00:21:08.626 } 00:21:08.626 }, 00:21:08.626 { 00:21:08.626 "method": "bdev_iscsi_set_options", 00:21:08.626 "params": { 00:21:08.626 "timeout_sec": 30 00:21:08.626 } 00:21:08.626 }, 00:21:08.626 { 00:21:08.626 "method": "bdev_nvme_set_options", 00:21:08.626 "params": { 00:21:08.626 "action_on_timeout": "none", 00:21:08.626 "timeout_us": 0, 00:21:08.626 "timeout_admin_us": 0, 00:21:08.626 "keep_alive_timeout_ms": 10000, 00:21:08.626 "arbitration_burst": 0, 00:21:08.626 "low_priority_weight": 0, 00:21:08.626 "medium_priority_weight": 0, 00:21:08.626 "high_priority_weight": 0, 00:21:08.626 "nvme_adminq_poll_period_us": 10000, 00:21:08.626 "nvme_ioq_poll_period_us": 0, 00:21:08.626 "io_queue_requests": 0, 00:21:08.626 "delay_cmd_submit": true, 00:21:08.626 "transport_retry_count": 4, 00:21:08.626 "bdev_retry_count": 3, 00:21:08.626 "transport_ack_timeout": 0, 00:21:08.626 "ctrlr_loss_timeout_sec": 0, 00:21:08.626 
"reconnect_delay_sec": 0, 00:21:08.626 "fast_io_fail_timeout_sec": 0, 00:21:08.626 "disable_auto_failback": false, 00:21:08.626 "generate_uuids": false, 00:21:08.626 "transport_tos": 0, 00:21:08.626 "nvme_error_stat": false, 00:21:08.626 "rdma_srq_size": 0, 00:21:08.626 "io_path_stat": false, 00:21:08.626 "allow_accel_sequence": false, 00:21:08.626 "rdma_max_cq_size": 0, 00:21:08.626 "rdma_cm_event_timeout_ms": 0, 00:21:08.626 "dhchap_digests": [ 00:21:08.626 "sha256", 00:21:08.626 "sha384", 00:21:08.626 "sha512" 00:21:08.626 ], 00:21:08.626 "dhchap_dhgroups": [ 00:21:08.626 "null", 00:21:08.626 "ffdhe2048", 00:21:08.626 "ffdhe3072", 00:21:08.626 "ffdhe4096", 00:21:08.626 "ffdhe6144", 00:21:08.626 "ffdhe8192" 00:21:08.626 ] 00:21:08.626 } 00:21:08.626 }, 00:21:08.626 { 00:21:08.626 "method": "bdev_nvme_set_hotplug", 00:21:08.626 "params": { 00:21:08.626 "period_us": 100000, 00:21:08.626 "enable": false 00:21:08.626 } 00:21:08.626 }, 00:21:08.626 { 00:21:08.626 "method": "bdev_malloc_create", 00:21:08.626 "params": { 00:21:08.626 "name": "malloc0", 00:21:08.626 "num_blocks": 8192, 00:21:08.626 "block_size": 4096, 00:21:08.626 "physical_block_size": 4096, 00:21:08.626 "uuid": "3fa6c06a-a0ec-48a3-92a2-a6a112dbb1ba", 00:21:08.626 "optimal_io_boundary": 0, 00:21:08.626 "md_size": 0, 00:21:08.626 "dif_type": 0, 00:21:08.626 "dif_is_head_of_md": false, 00:21:08.626 "dif_pi_format": 0 00:21:08.626 } 00:21:08.626 }, 00:21:08.626 { 00:21:08.626 "method": "bdev_wait_for_examine" 00:21:08.626 } 00:21:08.626 ] 00:21:08.626 }, 00:21:08.626 { 00:21:08.626 "subsystem": "nbd", 00:21:08.626 "config": [] 00:21:08.626 }, 00:21:08.626 { 00:21:08.626 "subsystem": "scheduler", 00:21:08.626 "config": [ 00:21:08.626 { 00:21:08.626 "method": "framework_set_scheduler", 00:21:08.626 "params": { 00:21:08.626 "name": "static" 00:21:08.626 } 00:21:08.626 } 00:21:08.626 ] 00:21:08.627 }, 00:21:08.627 { 00:21:08.627 "subsystem": "nvmf", 00:21:08.627 "config": [ 00:21:08.627 { 00:21:08.627 
"method": "nvmf_set_config", 00:21:08.627 "params": { 00:21:08.627 "discovery_filter": "match_any", 00:21:08.627 "admin_cmd_passthru": { 00:21:08.627 "identify_ctrlr": false 00:21:08.627 }, 00:21:08.627 "dhchap_digests": [ 00:21:08.627 "sha256", 00:21:08.627 "sha384", 00:21:08.627 "sha512" 00:21:08.627 ], 00:21:08.627 "dhchap_dhgroups": [ 00:21:08.627 "null", 00:21:08.627 "ffdhe2048", 00:21:08.627 "ffdhe3072", 00:21:08.627 "ffdhe4096", 00:21:08.627 "ffdhe6144", 00:21:08.627 "ffdhe8192" 00:21:08.627 ] 00:21:08.627 } 00:21:08.627 }, 00:21:08.627 { 00:21:08.627 "method": "nvmf_set_max_subsystems", 00:21:08.627 "params": { 00:21:08.627 "max_subsystems": 1024 00:21:08.627 } 00:21:08.627 }, 00:21:08.627 { 00:21:08.627 "method": "nvmf_set_crdt", 00:21:08.627 "params": { 00:21:08.627 "crdt1": 0, 00:21:08.627 "crdt2": 0, 00:21:08.627 "crdt3": 0 00:21:08.627 } 00:21:08.627 }, 00:21:08.627 { 00:21:08.627 "method": "nvmf_create_transport", 00:21:08.627 "params": { 00:21:08.627 "trtype": "TCP", 00:21:08.627 "max_queue_depth": 128, 00:21:08.627 "max_io_qpairs_per_ctrlr": 127, 00:21:08.627 "in_capsule_data_size": 4096, 00:21:08.627 "max_io_size": 131072, 00:21:08.627 "io_unit_size": 131072, 00:21:08.627 "max_aq_depth": 128, 00:21:08.627 "num_shared_buffers": 511, 00:21:08.627 "buf_cache_size": 4294967295, 00:21:08.627 "dif_insert_or_strip": false, 00:21:08.627 "zcopy": false, 00:21:08.627 "c2h_success": false, 00:21:08.627 "sock_priority": 0, 00:21:08.627 "abort_timeout_sec": 1, 00:21:08.627 "ack_timeout": 0, 00:21:08.627 "data_wr_pool_size": 0 00:21:08.627 } 00:21:08.627 }, 00:21:08.627 { 00:21:08.627 "method": "nvmf_create_subsystem", 00:21:08.627 "params": { 00:21:08.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.627 "allow_any_host": false, 00:21:08.627 "serial_number": "00000000000000000000", 00:21:08.627 "model_number": "SPDK bdev Controller", 00:21:08.627 "max_namespaces": 32, 00:21:08.627 "min_cntlid": 1, 00:21:08.627 "max_cntlid": 65519, 00:21:08.627 "ana_reporting": 
false 00:21:08.627 } 00:21:08.627 }, 00:21:08.627 { 00:21:08.627 "method": "nvmf_subsystem_add_host", 00:21:08.627 "params": { 00:21:08.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.627 "host": "nqn.2016-06.io.spdk:host1", 00:21:08.627 "psk": "key0" 00:21:08.627 } 00:21:08.627 }, 00:21:08.627 { 00:21:08.627 "method": "nvmf_subsystem_add_ns", 00:21:08.627 "params": { 00:21:08.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.627 "namespace": { 00:21:08.627 "nsid": 1, 00:21:08.627 "bdev_name": "malloc0", 00:21:08.627 "nguid": "3FA6C06AA0EC48A392A2A6A112DBB1BA", 00:21:08.627 "uuid": "3fa6c06a-a0ec-48a3-92a2-a6a112dbb1ba", 00:21:08.627 "no_auto_visible": false 00:21:08.627 } 00:21:08.627 } 00:21:08.627 }, 00:21:08.627 { 00:21:08.627 "method": "nvmf_subsystem_add_listener", 00:21:08.627 "params": { 00:21:08.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.627 "listen_address": { 00:21:08.627 "trtype": "TCP", 00:21:08.627 "adrfam": "IPv4", 00:21:08.627 "traddr": "10.0.0.2", 00:21:08.627 "trsvcid": "4420" 00:21:08.627 }, 00:21:08.627 "secure_channel": false, 00:21:08.627 "sock_impl": "ssl" 00:21:08.627 } 00:21:08.627 } 00:21:08.627 ] 00:21:08.627 } 00:21:08.627 ] 00:21:08.627 }' 00:21:08.627 05:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:08.627 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:08.627 "subsystems": [ 00:21:08.627 { 00:21:08.627 "subsystem": "keyring", 00:21:08.627 "config": [ 00:21:08.627 { 00:21:08.627 "method": "keyring_file_add_key", 00:21:08.627 "params": { 00:21:08.627 "name": "key0", 00:21:08.627 "path": "/tmp/tmp.faGvp4uipX" 00:21:08.627 } 00:21:08.627 } 00:21:08.627 ] 00:21:08.627 }, 00:21:08.627 { 00:21:08.627 "subsystem": "iobuf", 00:21:08.627 "config": [ 00:21:08.627 { 00:21:08.627 "method": "iobuf_set_options", 00:21:08.627 "params": { 00:21:08.627 "small_pool_count": 
8192, 00:21:08.627 "large_pool_count": 1024, 00:21:08.627 "small_bufsize": 8192, 00:21:08.627 "large_bufsize": 135168, 00:21:08.627 "enable_numa": false 00:21:08.627 } 00:21:08.627 } 00:21:08.627 ] 00:21:08.627 }, 00:21:08.627 { 00:21:08.627 "subsystem": "sock", 00:21:08.627 "config": [ 00:21:08.627 { 00:21:08.627 "method": "sock_set_default_impl", 00:21:08.627 "params": { 00:21:08.627 "impl_name": "posix" 00:21:08.627 } 00:21:08.627 }, 00:21:08.627 { 00:21:08.627 "method": "sock_impl_set_options", 00:21:08.627 "params": { 00:21:08.627 "impl_name": "ssl", 00:21:08.627 "recv_buf_size": 4096, 00:21:08.627 "send_buf_size": 4096, 00:21:08.627 "enable_recv_pipe": true, 00:21:08.627 "enable_quickack": false, 00:21:08.627 "enable_placement_id": 0, 00:21:08.627 "enable_zerocopy_send_server": true, 00:21:08.627 "enable_zerocopy_send_client": false, 00:21:08.627 "zerocopy_threshold": 0, 00:21:08.627 "tls_version": 0, 00:21:08.627 "enable_ktls": false 00:21:08.627 } 00:21:08.627 }, 00:21:08.627 { 00:21:08.627 "method": "sock_impl_set_options", 00:21:08.627 "params": { 00:21:08.627 "impl_name": "posix", 00:21:08.627 "recv_buf_size": 2097152, 00:21:08.627 "send_buf_size": 2097152, 00:21:08.627 "enable_recv_pipe": true, 00:21:08.627 "enable_quickack": false, 00:21:08.627 "enable_placement_id": 0, 00:21:08.627 "enable_zerocopy_send_server": true, 00:21:08.627 "enable_zerocopy_send_client": false, 00:21:08.627 "zerocopy_threshold": 0, 00:21:08.628 "tls_version": 0, 00:21:08.628 "enable_ktls": false 00:21:08.628 } 00:21:08.628 } 00:21:08.628 ] 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "subsystem": "vmd", 00:21:08.628 "config": [] 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "subsystem": "accel", 00:21:08.628 "config": [ 00:21:08.628 { 00:21:08.628 "method": "accel_set_options", 00:21:08.628 "params": { 00:21:08.628 "small_cache_size": 128, 00:21:08.628 "large_cache_size": 16, 00:21:08.628 "task_count": 2048, 00:21:08.628 "sequence_count": 2048, 00:21:08.628 "buf_count": 2048 
00:21:08.628 } 00:21:08.628 } 00:21:08.628 ] 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "subsystem": "bdev", 00:21:08.628 "config": [ 00:21:08.628 { 00:21:08.628 "method": "bdev_set_options", 00:21:08.628 "params": { 00:21:08.628 "bdev_io_pool_size": 65535, 00:21:08.628 "bdev_io_cache_size": 256, 00:21:08.628 "bdev_auto_examine": true, 00:21:08.628 "iobuf_small_cache_size": 128, 00:21:08.628 "iobuf_large_cache_size": 16 00:21:08.628 } 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "method": "bdev_raid_set_options", 00:21:08.628 "params": { 00:21:08.628 "process_window_size_kb": 1024, 00:21:08.628 "process_max_bandwidth_mb_sec": 0 00:21:08.628 } 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "method": "bdev_iscsi_set_options", 00:21:08.628 "params": { 00:21:08.628 "timeout_sec": 30 00:21:08.628 } 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "method": "bdev_nvme_set_options", 00:21:08.628 "params": { 00:21:08.628 "action_on_timeout": "none", 00:21:08.628 "timeout_us": 0, 00:21:08.628 "timeout_admin_us": 0, 00:21:08.628 "keep_alive_timeout_ms": 10000, 00:21:08.628 "arbitration_burst": 0, 00:21:08.628 "low_priority_weight": 0, 00:21:08.628 "medium_priority_weight": 0, 00:21:08.628 "high_priority_weight": 0, 00:21:08.628 "nvme_adminq_poll_period_us": 10000, 00:21:08.628 "nvme_ioq_poll_period_us": 0, 00:21:08.628 "io_queue_requests": 512, 00:21:08.628 "delay_cmd_submit": true, 00:21:08.628 "transport_retry_count": 4, 00:21:08.628 "bdev_retry_count": 3, 00:21:08.628 "transport_ack_timeout": 0, 00:21:08.628 "ctrlr_loss_timeout_sec": 0, 00:21:08.628 "reconnect_delay_sec": 0, 00:21:08.628 "fast_io_fail_timeout_sec": 0, 00:21:08.628 "disable_auto_failback": false, 00:21:08.628 "generate_uuids": false, 00:21:08.628 "transport_tos": 0, 00:21:08.628 "nvme_error_stat": false, 00:21:08.628 "rdma_srq_size": 0, 00:21:08.628 "io_path_stat": false, 00:21:08.628 "allow_accel_sequence": false, 00:21:08.628 "rdma_max_cq_size": 0, 00:21:08.628 "rdma_cm_event_timeout_ms": 0, 00:21:08.628 
"dhchap_digests": [ 00:21:08.628 "sha256", 00:21:08.628 "sha384", 00:21:08.628 "sha512" 00:21:08.628 ], 00:21:08.628 "dhchap_dhgroups": [ 00:21:08.628 "null", 00:21:08.628 "ffdhe2048", 00:21:08.628 "ffdhe3072", 00:21:08.628 "ffdhe4096", 00:21:08.628 "ffdhe6144", 00:21:08.628 "ffdhe8192" 00:21:08.628 ] 00:21:08.628 } 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "method": "bdev_nvme_attach_controller", 00:21:08.628 "params": { 00:21:08.628 "name": "nvme0", 00:21:08.628 "trtype": "TCP", 00:21:08.628 "adrfam": "IPv4", 00:21:08.628 "traddr": "10.0.0.2", 00:21:08.628 "trsvcid": "4420", 00:21:08.628 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.628 "prchk_reftag": false, 00:21:08.628 "prchk_guard": false, 00:21:08.628 "ctrlr_loss_timeout_sec": 0, 00:21:08.628 "reconnect_delay_sec": 0, 00:21:08.628 "fast_io_fail_timeout_sec": 0, 00:21:08.628 "psk": "key0", 00:21:08.628 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:08.628 "hdgst": false, 00:21:08.628 "ddgst": false, 00:21:08.628 "multipath": "multipath" 00:21:08.628 } 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "method": "bdev_nvme_set_hotplug", 00:21:08.628 "params": { 00:21:08.628 "period_us": 100000, 00:21:08.628 "enable": false 00:21:08.628 } 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "method": "bdev_enable_histogram", 00:21:08.628 "params": { 00:21:08.628 "name": "nvme0n1", 00:21:08.628 "enable": true 00:21:08.628 } 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "method": "bdev_wait_for_examine" 00:21:08.628 } 00:21:08.628 ] 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "subsystem": "nbd", 00:21:08.628 "config": [] 00:21:08.628 } 00:21:08.628 ] 00:21:08.628 }' 00:21:08.628 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 513984 00:21:08.628 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 513984 ']' 00:21:08.628 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 513984 00:21:08.628 05:15:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:08.888 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:08.888 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 513984 00:21:08.888 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:08.888 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:08.888 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 513984' 00:21:08.888 killing process with pid 513984 00:21:08.888 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 513984 00:21:08.888 Received shutdown signal, test time was about 1.000000 seconds 00:21:08.888 00:21:08.888 Latency(us) 00:21:08.888 [2024-12-09T04:15:51.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.888 [2024-12-09T04:15:51.358Z] =================================================================================================================== 00:21:08.888 [2024-12-09T04:15:51.358Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:08.888 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 513984 00:21:08.888 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 513731 00:21:08.889 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 513731 ']' 00:21:08.889 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 513731 00:21:08.889 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:08.889 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.148 05:15:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 513731 00:21:09.148 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:09.148 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:09.148 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 513731' 00:21:09.148 killing process with pid 513731 00:21:09.148 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 513731 00:21:09.148 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 513731 00:21:09.408 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:09.408 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:09.408 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:09.408 "subsystems": [ 00:21:09.408 { 00:21:09.408 "subsystem": "keyring", 00:21:09.408 "config": [ 00:21:09.408 { 00:21:09.408 "method": "keyring_file_add_key", 00:21:09.408 "params": { 00:21:09.408 "name": "key0", 00:21:09.408 "path": "/tmp/tmp.faGvp4uipX" 00:21:09.408 } 00:21:09.408 } 00:21:09.408 ] 00:21:09.408 }, 00:21:09.408 { 00:21:09.408 "subsystem": "iobuf", 00:21:09.408 "config": [ 00:21:09.408 { 00:21:09.408 "method": "iobuf_set_options", 00:21:09.408 "params": { 00:21:09.408 "small_pool_count": 8192, 00:21:09.408 "large_pool_count": 1024, 00:21:09.408 "small_bufsize": 8192, 00:21:09.408 "large_bufsize": 135168, 00:21:09.408 "enable_numa": false 00:21:09.408 } 00:21:09.408 } 00:21:09.408 ] 00:21:09.408 }, 00:21:09.408 { 00:21:09.408 "subsystem": "sock", 00:21:09.408 "config": [ 00:21:09.408 { 00:21:09.408 "method": "sock_set_default_impl", 00:21:09.408 "params": { 00:21:09.408 "impl_name": "posix" 00:21:09.408 
} 00:21:09.408 }, 00:21:09.408 { 00:21:09.408 "method": "sock_impl_set_options", 00:21:09.408 "params": { 00:21:09.408 "impl_name": "ssl", 00:21:09.408 "recv_buf_size": 4096, 00:21:09.408 "send_buf_size": 4096, 00:21:09.408 "enable_recv_pipe": true, 00:21:09.408 "enable_quickack": false, 00:21:09.408 "enable_placement_id": 0, 00:21:09.408 "enable_zerocopy_send_server": true, 00:21:09.408 "enable_zerocopy_send_client": false, 00:21:09.408 "zerocopy_threshold": 0, 00:21:09.408 "tls_version": 0, 00:21:09.408 "enable_ktls": false 00:21:09.408 } 00:21:09.408 }, 00:21:09.408 { 00:21:09.408 "method": "sock_impl_set_options", 00:21:09.408 "params": { 00:21:09.408 "impl_name": "posix", 00:21:09.408 "recv_buf_size": 2097152, 00:21:09.408 "send_buf_size": 2097152, 00:21:09.408 "enable_recv_pipe": true, 00:21:09.408 "enable_quickack": false, 00:21:09.408 "enable_placement_id": 0, 00:21:09.408 "enable_zerocopy_send_server": true, 00:21:09.408 "enable_zerocopy_send_client": false, 00:21:09.408 "zerocopy_threshold": 0, 00:21:09.408 "tls_version": 0, 00:21:09.408 "enable_ktls": false 00:21:09.408 } 00:21:09.408 } 00:21:09.408 ] 00:21:09.408 }, 00:21:09.408 { 00:21:09.408 "subsystem": "vmd", 00:21:09.408 "config": [] 00:21:09.408 }, 00:21:09.408 { 00:21:09.408 "subsystem": "accel", 00:21:09.408 "config": [ 00:21:09.408 { 00:21:09.408 "method": "accel_set_options", 00:21:09.408 "params": { 00:21:09.408 "small_cache_size": 128, 00:21:09.408 "large_cache_size": 16, 00:21:09.408 "task_count": 2048, 00:21:09.408 "sequence_count": 2048, 00:21:09.408 "buf_count": 2048 00:21:09.408 } 00:21:09.408 } 00:21:09.408 ] 00:21:09.408 }, 00:21:09.408 { 00:21:09.408 "subsystem": "bdev", 00:21:09.408 "config": [ 00:21:09.408 { 00:21:09.408 "method": "bdev_set_options", 00:21:09.408 "params": { 00:21:09.408 "bdev_io_pool_size": 65535, 00:21:09.408 "bdev_io_cache_size": 256, 00:21:09.408 "bdev_auto_examine": true, 00:21:09.408 "iobuf_small_cache_size": 128, 00:21:09.408 "iobuf_large_cache_size": 16 
00:21:09.408 } 00:21:09.408 }, 00:21:09.408 { 00:21:09.408 "method": "bdev_raid_set_options", 00:21:09.408 "params": { 00:21:09.408 "process_window_size_kb": 1024, 00:21:09.408 "process_max_bandwidth_mb_sec": 0 00:21:09.408 } 00:21:09.408 }, 00:21:09.408 { 00:21:09.408 "method": "bdev_iscsi_set_options", 00:21:09.408 "params": { 00:21:09.408 "timeout_sec": 30 00:21:09.408 } 00:21:09.408 }, 00:21:09.408 { 00:21:09.408 "method": "bdev_nvme_set_options", 00:21:09.408 "params": { 00:21:09.408 "action_on_timeout": "none", 00:21:09.408 "timeout_us": 0, 00:21:09.408 "timeout_admin_us": 0, 00:21:09.408 "keep_alive_timeout_ms": 10000, 00:21:09.408 "arbitration_burst": 0, 00:21:09.408 "low_priority_weight": 0, 00:21:09.408 "medium_priority_weight": 0, 00:21:09.408 "high_priority_weight": 0, 00:21:09.408 "nvme_adminq_poll_period_us": 10000, 00:21:09.408 "nvme_ioq_poll_period_us": 0, 00:21:09.408 "io_queue_requests": 0, 00:21:09.408 "delay_cmd_submit": true, 00:21:09.408 "transport_retry_count": 4, 00:21:09.408 "bdev_retry_count": 3, 00:21:09.408 "transport_ack_timeout": 0, 00:21:09.408 "ctrlr_loss_timeout_sec": 0, 00:21:09.408 "reconnect_delay_sec": 0, 00:21:09.408 "fast_io_fail_timeout_sec": 0, 00:21:09.408 "disable_auto_failback": false, 00:21:09.408 "generate_uuids": false, 00:21:09.408 "transport_tos": 0, 00:21:09.408 "nvme_error_stat": false, 00:21:09.408 "rdma_srq_size": 0, 00:21:09.408 "io_path_stat": false, 00:21:09.408 "allow_accel_sequence": false, 00:21:09.408 "rdma_max_cq_size": 0, 00:21:09.408 "rdma_cm_event_timeout_ms": 0, 00:21:09.408 "dhchap_digests": [ 00:21:09.408 "sha256", 00:21:09.408 "sha384", 00:21:09.408 "sha512" 00:21:09.408 ], 00:21:09.408 "dhchap_dhgroups": [ 00:21:09.408 "null", 00:21:09.408 "ffdhe2048", 00:21:09.408 "ffdhe3072", 00:21:09.408 "ffdhe4096", 00:21:09.408 "ffdhe6144", 00:21:09.408 "ffdhe8192" 00:21:09.408 ] 00:21:09.408 } 00:21:09.408 }, 00:21:09.408 { 00:21:09.408 "method": "bdev_nvme_set_hotplug", 00:21:09.408 "params": { 00:21:09.408 
"period_us": 100000, 00:21:09.408 "enable": false 00:21:09.408 } 00:21:09.408 }, 00:21:09.408 { 00:21:09.408 "method": "bdev_malloc_create", 00:21:09.409 "params": { 00:21:09.409 "name": "malloc0", 00:21:09.409 "num_blocks": 8192, 00:21:09.409 "block_size": 4096, 00:21:09.409 "physical_block_size": 4096, 00:21:09.409 "uuid": "3fa6c06a-a0ec-48a3-92a2-a6a112dbb1ba", 00:21:09.409 "optimal_io_boundary": 0, 00:21:09.409 "md_size": 0, 00:21:09.409 "dif_type": 0, 00:21:09.409 "dif_is_head_of_md": false, 00:21:09.409 "dif_pi_format": 0 00:21:09.409 } 00:21:09.409 }, 00:21:09.409 { 00:21:09.409 "method": "bdev_wait_for_examine" 00:21:09.409 } 00:21:09.409 ] 00:21:09.409 }, 00:21:09.409 { 00:21:09.409 "subsystem": "nbd", 00:21:09.409 "config": [] 00:21:09.409 }, 00:21:09.409 { 00:21:09.409 "subsystem": "scheduler", 00:21:09.409 "config": [ 00:21:09.409 { 00:21:09.409 "method": "framework_set_scheduler", 00:21:09.409 "params": { 00:21:09.409 "name": "static" 00:21:09.409 } 00:21:09.409 } 00:21:09.409 ] 00:21:09.409 }, 00:21:09.409 { 00:21:09.409 "subsystem": "nvmf", 00:21:09.409 "config": [ 00:21:09.409 { 00:21:09.409 "method": "nvmf_set_config", 00:21:09.409 "params": { 00:21:09.409 "discovery_filter": "match_any", 00:21:09.409 "admin_cmd_passthru": { 00:21:09.409 "identify_ctrlr": false 00:21:09.409 }, 00:21:09.409 "dhchap_digests": [ 00:21:09.409 "sha256", 00:21:09.409 "sha384", 00:21:09.409 "sha512" 00:21:09.409 ], 00:21:09.409 "dhchap_dhgroups": [ 00:21:09.409 "null", 00:21:09.409 "ffdhe2048", 00:21:09.409 "ffdhe3072", 00:21:09.409 "ffdhe4096", 00:21:09.409 "ffdhe6144", 00:21:09.409 "ffdhe8192" 00:21:09.409 ] 00:21:09.409 } 00:21:09.409 }, 00:21:09.409 { 00:21:09.409 "method": "nvmf_set_max_subsystems", 00:21:09.409 "params": { 00:21:09.409 "max_subsystems": 1024 00:21:09.409 } 00:21:09.409 }, 00:21:09.409 { 00:21:09.409 "method": "nvmf_set_crdt", 00:21:09.409 "params": { 00:21:09.409 "crdt1": 0, 00:21:09.409 "crdt2": 0, 00:21:09.409 "crdt3": 0 00:21:09.409 } 
00:21:09.409 }, 00:21:09.409 { 00:21:09.409 "method": "nvmf_create_transport", 00:21:09.409 "params": { 00:21:09.409 "trtype": "TCP", 00:21:09.409 "max_queue_depth": 128, 00:21:09.409 "max_io_qpairs_per_ctrlr": 127, 00:21:09.409 "in_capsule_data_size": 4096, 00:21:09.409 "max_io_size": 131072, 00:21:09.409 "io_unit_size": 131072, 00:21:09.409 "max_aq_depth": 128, 00:21:09.409 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:09.409 "num_shared_buffers": 511, 00:21:09.409 "buf_cache_size": 4294967295, 00:21:09.409 "dif_insert_or_strip": false, 00:21:09.409 "zcopy": false, 00:21:09.409 "c2h_success": false, 00:21:09.409 "sock_priority": 0, 00:21:09.409 "abort_timeout_sec": 1, 00:21:09.409 "ack_timeout": 0, 00:21:09.409 "data_wr_pool_size": 0 00:21:09.409 } 00:21:09.409 }, 00:21:09.409 { 00:21:09.409 "method": "nvmf_create_subsystem", 00:21:09.409 "params": { 00:21:09.409 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.409 "allow_any_host": false, 00:21:09.409 "serial_number": "00000000000000000000", 00:21:09.409 "model_number": "SPDK bdev Controller", 00:21:09.409 "max_namespaces": 32, 00:21:09.409 "min_cntlid": 1, 00:21:09.409 "max_cntlid": 65519, 00:21:09.409 "ana_reporting": false 00:21:09.409 } 00:21:09.409 }, 00:21:09.409 { 00:21:09.409 "method": "nvmf_subsystem_add_host", 00:21:09.409 "params": { 00:21:09.409 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.409 "host": "nqn.2016-06.io.spdk:host1", 00:21:09.409 "psk": "key0" 00:21:09.409 } 00:21:09.409 }, 00:21:09.409 { 00:21:09.409 "method": "nvmf_subsystem_add_ns", 00:21:09.409 "params": { 00:21:09.409 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.409 "namespace": { 00:21:09.409 "nsid": 1, 00:21:09.409 "bdev_name": "malloc0", 00:21:09.409 "nguid": "3FA6C06AA0EC48A392A2A6A112DBB1BA", 00:21:09.409 "uuid": "3fa6c06a-a0ec-48a3-92a2-a6a112dbb1ba", 00:21:09.409 "no_auto_visible": false 00:21:09.409 } 00:21:09.409 } 00:21:09.409 }, 00:21:09.409 { 00:21:09.409 "method": 
"nvmf_subsystem_add_listener", 00:21:09.409 "params": { 00:21:09.409 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.409 "listen_address": { 00:21:09.409 "trtype": "TCP", 00:21:09.409 "adrfam": "IPv4", 00:21:09.409 "traddr": "10.0.0.2", 00:21:09.409 "trsvcid": "4420" 00:21:09.409 }, 00:21:09.409 "secure_channel": false, 00:21:09.409 "sock_impl": "ssl" 00:21:09.409 } 00:21:09.409 } 00:21:09.409 ] 00:21:09.409 } 00:21:09.409 ] 00:21:09.409 }' 00:21:09.409 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.409 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=514562 00:21:09.409 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:09.409 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 514562 00:21:09.409 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 514562 ']' 00:21:09.409 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.409 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:09.409 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.409 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:09.409 05:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.409 [2024-12-09 05:15:51.678010] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:21:09.409 [2024-12-09 05:15:51.678062] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.409 [2024-12-09 05:15:51.775139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.409 [2024-12-09 05:15:51.813706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.409 [2024-12-09 05:15:51.813742] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.409 [2024-12-09 05:15:51.813752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.409 [2024-12-09 05:15:51.813760] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.409 [2024-12-09 05:15:51.813767] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:09.409 [2024-12-09 05:15:51.814351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.668 [2024-12-09 05:15:52.028659] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.668 [2024-12-09 05:15:52.060692] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:09.668 [2024-12-09 05:15:52.060906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.237 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.237 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:10.237 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:10.237 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:10.237 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.237 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.237 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=514623 00:21:10.237 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 514623 /var/tmp/bdevperf.sock 00:21:10.237 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 514623 ']' 00:21:10.237 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:10.237 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.237 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c 
/dev/fd/63 00:21:10.237 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:10.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:10.237 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.237 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:10.237 "subsystems": [ 00:21:10.237 { 00:21:10.237 "subsystem": "keyring", 00:21:10.237 "config": [ 00:21:10.237 { 00:21:10.237 "method": "keyring_file_add_key", 00:21:10.237 "params": { 00:21:10.237 "name": "key0", 00:21:10.237 "path": "/tmp/tmp.faGvp4uipX" 00:21:10.237 } 00:21:10.237 } 00:21:10.237 ] 00:21:10.237 }, 00:21:10.237 { 00:21:10.237 "subsystem": "iobuf", 00:21:10.238 "config": [ 00:21:10.238 { 00:21:10.238 "method": "iobuf_set_options", 00:21:10.238 "params": { 00:21:10.238 "small_pool_count": 8192, 00:21:10.238 "large_pool_count": 1024, 00:21:10.238 "small_bufsize": 8192, 00:21:10.238 "large_bufsize": 135168, 00:21:10.238 "enable_numa": false 00:21:10.238 } 00:21:10.238 } 00:21:10.238 ] 00:21:10.238 }, 00:21:10.238 { 00:21:10.238 "subsystem": "sock", 00:21:10.238 "config": [ 00:21:10.238 { 00:21:10.238 "method": "sock_set_default_impl", 00:21:10.238 "params": { 00:21:10.238 "impl_name": "posix" 00:21:10.238 } 00:21:10.238 }, 00:21:10.238 { 00:21:10.238 "method": "sock_impl_set_options", 00:21:10.238 "params": { 00:21:10.238 "impl_name": "ssl", 00:21:10.238 "recv_buf_size": 4096, 00:21:10.238 "send_buf_size": 4096, 00:21:10.238 "enable_recv_pipe": true, 00:21:10.238 "enable_quickack": false, 00:21:10.238 "enable_placement_id": 0, 00:21:10.238 "enable_zerocopy_send_server": true, 00:21:10.238 "enable_zerocopy_send_client": false, 00:21:10.238 "zerocopy_threshold": 0, 00:21:10.238 "tls_version": 0, 00:21:10.238 "enable_ktls": false 00:21:10.238 } 
00:21:10.238 }, 00:21:10.238 { 00:21:10.238 "method": "sock_impl_set_options", 00:21:10.238 "params": { 00:21:10.238 "impl_name": "posix", 00:21:10.238 "recv_buf_size": 2097152, 00:21:10.238 "send_buf_size": 2097152, 00:21:10.238 "enable_recv_pipe": true, 00:21:10.238 "enable_quickack": false, 00:21:10.238 "enable_placement_id": 0, 00:21:10.238 "enable_zerocopy_send_server": true, 00:21:10.238 "enable_zerocopy_send_client": false, 00:21:10.238 "zerocopy_threshold": 0, 00:21:10.238 "tls_version": 0, 00:21:10.238 "enable_ktls": false 00:21:10.238 } 00:21:10.238 } 00:21:10.238 ] 00:21:10.238 }, 00:21:10.238 { 00:21:10.238 "subsystem": "vmd", 00:21:10.238 "config": [] 00:21:10.238 }, 00:21:10.238 { 00:21:10.238 "subsystem": "accel", 00:21:10.238 "config": [ 00:21:10.238 { 00:21:10.238 "method": "accel_set_options", 00:21:10.238 "params": { 00:21:10.238 "small_cache_size": 128, 00:21:10.238 "large_cache_size": 16, 00:21:10.238 "task_count": 2048, 00:21:10.238 "sequence_count": 2048, 00:21:10.238 "buf_count": 2048 00:21:10.238 } 00:21:10.238 } 00:21:10.238 ] 00:21:10.238 }, 00:21:10.238 { 00:21:10.238 "subsystem": "bdev", 00:21:10.238 "config": [ 00:21:10.238 { 00:21:10.238 "method": "bdev_set_options", 00:21:10.238 "params": { 00:21:10.238 "bdev_io_pool_size": 65535, 00:21:10.238 "bdev_io_cache_size": 256, 00:21:10.238 "bdev_auto_examine": true, 00:21:10.238 "iobuf_small_cache_size": 128, 00:21:10.238 "iobuf_large_cache_size": 16 00:21:10.238 } 00:21:10.238 }, 00:21:10.238 { 00:21:10.238 "method": "bdev_raid_set_options", 00:21:10.238 "params": { 00:21:10.238 "process_window_size_kb": 1024, 00:21:10.238 "process_max_bandwidth_mb_sec": 0 00:21:10.238 } 00:21:10.238 }, 00:21:10.238 { 00:21:10.238 "method": "bdev_iscsi_set_options", 00:21:10.238 "params": { 00:21:10.238 "timeout_sec": 30 00:21:10.238 } 00:21:10.238 }, 00:21:10.238 { 00:21:10.238 "method": "bdev_nvme_set_options", 00:21:10.238 "params": { 00:21:10.238 "action_on_timeout": "none", 00:21:10.238 "timeout_us": 
0, 00:21:10.238 "timeout_admin_us": 0, 00:21:10.238 "keep_alive_timeout_ms": 10000, 00:21:10.238 "arbitration_burst": 0, 00:21:10.238 "low_priority_weight": 0, 00:21:10.238 "medium_priority_weight": 0, 00:21:10.238 "high_priority_weight": 0, 00:21:10.238 "nvme_adminq_poll_period_us": 10000, 00:21:10.238 "nvme_ioq_poll_period_us": 0, 00:21:10.238 "io_queue_requests": 512, 00:21:10.238 "delay_cmd_submit": true, 00:21:10.238 "transport_retry_count": 4, 00:21:10.238 "bdev_retry_count": 3, 00:21:10.238 "transport_ack_timeout": 0, 00:21:10.238 "ctrlr_loss_timeout_sec": 0, 00:21:10.238 "reconnect_delay_sec": 0, 00:21:10.238 "fast_io_fail_timeout_sec": 0, 00:21:10.238 "disable_auto_failback": false, 00:21:10.238 "generate_uuids": false, 00:21:10.238 "transport_tos": 0, 00:21:10.238 "nvme_error_stat": false, 00:21:10.238 "rdma_srq_size": 0, 00:21:10.238 "io_path_stat": false, 00:21:10.238 "allow_accel_sequence": false, 00:21:10.238 "rdma_max_cq_size": 0, 00:21:10.238 "rdma_cm_event_timeout_ms": 0, 00:21:10.238 "dhchap_digests": [ 00:21:10.238 "sha256", 00:21:10.238 "sha384", 00:21:10.238 "sha512" 00:21:10.238 ], 00:21:10.238 "dhchap_dhgroups": [ 00:21:10.238 "null", 00:21:10.238 "ffdhe2048", 00:21:10.238 "ffdhe3072", 00:21:10.238 "ffdhe4096", 00:21:10.238 "ffdhe6144", 00:21:10.238 "ffdhe8192" 00:21:10.238 ] 00:21:10.238 } 00:21:10.238 }, 00:21:10.238 { 00:21:10.238 "method": "bdev_nvme_attach_controller", 00:21:10.238 "params": { 00:21:10.238 "name": "nvme0", 00:21:10.238 "trtype": "TCP", 00:21:10.238 "adrfam": "IPv4", 00:21:10.238 "traddr": "10.0.0.2", 00:21:10.238 "trsvcid": "4420", 00:21:10.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.238 "prchk_reftag": false, 00:21:10.238 "prchk_guard": false, 00:21:10.238 "ctrlr_loss_timeout_sec": 0, 00:21:10.238 "reconnect_delay_sec": 0, 00:21:10.238 "fast_io_fail_timeout_sec": 0, 00:21:10.238 "psk": "key0", 00:21:10.238 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:10.238 "hdgst": false, 00:21:10.238 "ddgst": false, 
00:21:10.238 "multipath": "multipath" 00:21:10.238 } 00:21:10.238 }, 00:21:10.238 { 00:21:10.238 "method": "bdev_nvme_set_hotplug", 00:21:10.238 "params": { 00:21:10.238 "period_us": 100000, 00:21:10.238 "enable": false 00:21:10.238 } 00:21:10.238 }, 00:21:10.238 { 00:21:10.238 "method": "bdev_enable_histogram", 00:21:10.238 "params": { 00:21:10.238 "name": "nvme0n1", 00:21:10.238 "enable": true 00:21:10.238 } 00:21:10.238 }, 00:21:10.238 { 00:21:10.238 "method": "bdev_wait_for_examine" 00:21:10.238 } 00:21:10.238 ] 00:21:10.238 }, 00:21:10.238 { 00:21:10.238 "subsystem": "nbd", 00:21:10.238 "config": [] 00:21:10.238 } 00:21:10.238 ] 00:21:10.238 }' 00:21:10.238 05:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.238 [2024-12-09 05:15:52.609834] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:21:10.238 [2024-12-09 05:15:52.609885] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid514623 ] 00:21:10.238 [2024-12-09 05:15:52.702122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.498 [2024-12-09 05:15:52.742659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.498 [2024-12-09 05:15:52.895840] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:11.067 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.067 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:11.067 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:11.067 05:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:11.326 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.327 05:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:11.327 Running I/O for 1 seconds... 00:21:12.523 5390.00 IOPS, 21.05 MiB/s 00:21:12.523 Latency(us) 00:21:12.523 [2024-12-09T04:15:54.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.523 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:12.523 Verification LBA range: start 0x0 length 0x2000 00:21:12.523 nvme0n1 : 1.02 5433.89 21.23 0.00 0.00 23372.91 4666.16 30618.42 00:21:12.523 [2024-12-09T04:15:54.993Z] =================================================================================================================== 00:21:12.523 [2024-12-09T04:15:54.993Z] Total : 5433.89 21.23 0.00 0.00 23372.91 4666.16 30618.42 00:21:12.523 { 00:21:12.523 "results": [ 00:21:12.524 { 00:21:12.524 "job": "nvme0n1", 00:21:12.524 "core_mask": "0x2", 00:21:12.524 "workload": "verify", 00:21:12.524 "status": "finished", 00:21:12.524 "verify_range": { 00:21:12.524 "start": 0, 00:21:12.524 "length": 8192 00:21:12.524 }, 00:21:12.524 "queue_depth": 128, 00:21:12.524 "io_size": 4096, 00:21:12.524 "runtime": 1.015478, 00:21:12.524 "iops": 5433.894185792306, 00:21:12.524 "mibps": 21.226149163251197, 00:21:12.524 "io_failed": 0, 00:21:12.524 "io_timeout": 0, 00:21:12.524 "avg_latency_us": 23372.912012758246, 00:21:12.524 "min_latency_us": 4666.1632, 00:21:12.524 "max_latency_us": 30618.4192 00:21:12.524 } 00:21:12.524 ], 00:21:12.524 "core_count": 1 00:21:12.524 } 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:12.524 05:15:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:12.524 nvmf_trace.0 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 514623 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 514623 ']' 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 514623 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 514623 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 514623' 00:21:12.524 killing process with pid 514623 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 514623 00:21:12.524 Received shutdown signal, test time was about 1.000000 seconds 00:21:12.524 00:21:12.524 Latency(us) 00:21:12.524 [2024-12-09T04:15:54.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.524 [2024-12-09T04:15:54.994Z] =================================================================================================================== 00:21:12.524 [2024-12-09T04:15:54.994Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:12.524 05:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 514623 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:12.785 rmmod nvme_tcp 00:21:12.785 rmmod nvme_fabrics 00:21:12.785 rmmod nvme_keyring 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 514562 ']' 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 514562 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 514562 ']' 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 514562 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 514562 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 514562' 00:21:12.785 killing process with pid 514562 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 514562 00:21:12.785 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 514562 00:21:13.046 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:13.046 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:13.046 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:13.046 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:21:13.046 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:13.046 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:13.046 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:13.046 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:13.046 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:13.046 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.046 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.046 05:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.576 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:15.576 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.YdpNxYedeR /tmp/tmp.gadTKxuAJk /tmp/tmp.faGvp4uipX 00:21:15.576 00:21:15.576 real 1m30.076s 00:21:15.576 user 2m15.008s 00:21:15.576 sys 0m35.127s 00:21:15.576 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:15.576 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.576 ************************************ 00:21:15.576 END TEST nvmf_tls 00:21:15.576 ************************************ 00:21:15.576 05:15:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:15.576 05:15:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:15.576 05:15:57 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:21:15.576 05:15:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:15.576 ************************************ 00:21:15.576 START TEST nvmf_fips 00:21:15.576 ************************************ 00:21:15.576 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:15.576 * Looking for test storage... 00:21:15.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:15.576 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:15.576 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:21:15.576 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:15.576 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:15.576 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:15.576 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:15.576 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:15.577 
05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:15.577 05:15:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:15.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.577 --rc genhtml_branch_coverage=1 00:21:15.577 --rc genhtml_function_coverage=1 00:21:15.577 --rc genhtml_legend=1 00:21:15.577 --rc geninfo_all_blocks=1 00:21:15.577 --rc geninfo_unexecuted_blocks=1 00:21:15.577 00:21:15.577 ' 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:15.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.577 --rc genhtml_branch_coverage=1 00:21:15.577 --rc genhtml_function_coverage=1 00:21:15.577 --rc genhtml_legend=1 00:21:15.577 --rc geninfo_all_blocks=1 00:21:15.577 --rc geninfo_unexecuted_blocks=1 00:21:15.577 00:21:15.577 ' 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:15.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.577 --rc genhtml_branch_coverage=1 00:21:15.577 --rc genhtml_function_coverage=1 00:21:15.577 --rc genhtml_legend=1 00:21:15.577 --rc geninfo_all_blocks=1 00:21:15.577 --rc geninfo_unexecuted_blocks=1 00:21:15.577 00:21:15.577 ' 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:15.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.577 --rc genhtml_branch_coverage=1 00:21:15.577 --rc genhtml_function_coverage=1 00:21:15.577 --rc genhtml_legend=1 00:21:15.577 --rc geninfo_all_blocks=1 00:21:15.577 --rc geninfo_unexecuted_blocks=1 00:21:15.577 00:21:15.577 ' 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.577 05:15:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.577 05:15:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:15.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:15.577 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:15.578 05:15:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:21:15.837 Error setting digest 00:21:15.837 40129769A47F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:15.837 40129769A47F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:15.837 05:15:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:15.837 05:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:22.731 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:22.731 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:22.732 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:22.732 Found net devices under 0000:af:00.0: cvl_0_0 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:22.732 Found net devices under 0000:af:00.1: cvl_0_1 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:22.732 05:16:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:22.732 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:22.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:21:22.990 00:21:22.990 --- 10.0.0.2 ping statistics --- 00:21:22.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.990 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:22.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:22.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:21:22.990 00:21:22.990 --- 10.0.0.1 ping statistics --- 00:21:22.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.990 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:22.990 05:16:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=518906 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 518906 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 518906 ']' 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.990 05:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:23.249 [2024-12-09 05:16:05.486426] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:21:23.249 [2024-12-09 05:16:05.486475] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.249 [2024-12-09 05:16:05.585934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.249 [2024-12-09 05:16:05.622852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.249 [2024-12-09 05:16:05.622888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.249 [2024-12-09 05:16:05.622897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.249 [2024-12-09 05:16:05.622905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.249 [2024-12-09 05:16:05.622912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:23.249 [2024-12-09 05:16:05.623509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.ngF 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.ngF 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.ngF 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.ngF 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:24.184 [2024-12-09 05:16:06.525699] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.184 [2024-12-09 05:16:06.541704] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:24.184 [2024-12-09 05:16:06.541929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.184 malloc0 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=519173 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 519173 /var/tmp/bdevperf.sock 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 519173 ']' 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:24.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.184 05:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:24.442 [2024-12-09 05:16:06.671734] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:21:24.442 [2024-12-09 05:16:06.671781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid519173 ] 00:21:24.442 [2024-12-09 05:16:06.763335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.442 [2024-12-09 05:16:06.803691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:25.381 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:25.381 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:25.381 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.ngF 00:21:25.381 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:25.381 [2024-12-09 05:16:07.838725] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:25.639 TLSTESTn1 00:21:25.639 05:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:25.639 Running I/O for 10 seconds... 
00:21:27.951 4709.00 IOPS, 18.39 MiB/s [2024-12-09T04:16:11.373Z] 5040.00 IOPS, 19.69 MiB/s [2024-12-09T04:16:12.309Z] 5160.00 IOPS, 20.16 MiB/s [2024-12-09T04:16:13.245Z] 5258.25 IOPS, 20.54 MiB/s [2024-12-09T04:16:14.181Z] 5265.00 IOPS, 20.57 MiB/s [2024-12-09T04:16:15.117Z] 5325.00 IOPS, 20.80 MiB/s [2024-12-09T04:16:16.051Z] 5346.71 IOPS, 20.89 MiB/s [2024-12-09T04:16:17.427Z] 5297.75 IOPS, 20.69 MiB/s [2024-12-09T04:16:18.365Z] 5224.22 IOPS, 20.41 MiB/s [2024-12-09T04:16:18.365Z] 5169.50 IOPS, 20.19 MiB/s 00:21:35.895 Latency(us) 00:21:35.895 [2024-12-09T04:16:18.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.895 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:35.895 Verification LBA range: start 0x0 length 0x2000 00:21:35.895 TLSTESTn1 : 10.02 5173.09 20.21 0.00 0.00 24706.53 6579.81 59559.12 00:21:35.895 [2024-12-09T04:16:18.365Z] =================================================================================================================== 00:21:35.895 [2024-12-09T04:16:18.365Z] Total : 5173.09 20.21 0.00 0.00 24706.53 6579.81 59559.12 00:21:35.895 { 00:21:35.895 "results": [ 00:21:35.895 { 00:21:35.895 "job": "TLSTESTn1", 00:21:35.895 "core_mask": "0x4", 00:21:35.895 "workload": "verify", 00:21:35.895 "status": "finished", 00:21:35.895 "verify_range": { 00:21:35.895 "start": 0, 00:21:35.895 "length": 8192 00:21:35.895 }, 00:21:35.895 "queue_depth": 128, 00:21:35.895 "io_size": 4096, 00:21:35.895 "runtime": 10.017425, 00:21:35.895 "iops": 5173.0858978230435, 00:21:35.895 "mibps": 20.207366788371264, 00:21:35.895 "io_failed": 0, 00:21:35.895 "io_timeout": 0, 00:21:35.895 "avg_latency_us": 24706.533254394937, 00:21:35.895 "min_latency_us": 6579.8144, 00:21:35.895 "max_latency_us": 59559.1168 00:21:35.895 } 00:21:35.895 ], 00:21:35.895 "core_count": 1 00:21:35.895 } 00:21:35.895 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:35.895 05:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:35.895 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:21:35.895 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:21:35.895 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:35.895 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:35.895 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:35.895 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:35.895 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:35.895 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:35.895 nvmf_trace.0 00:21:35.895 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:21:35.895 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 519173 00:21:35.895 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 519173 ']' 00:21:35.895 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 519173 00:21:35.895 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:35.895 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.895 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 519173 00:21:35.895 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:35.895 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:35.895 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 519173' 00:21:35.895 killing process with pid 519173 00:21:35.895 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 519173 00:21:35.895 Received shutdown signal, test time was about 10.000000 seconds 00:21:35.895 00:21:35.895 Latency(us) 00:21:35.895 [2024-12-09T04:16:18.365Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.895 [2024-12-09T04:16:18.365Z] =================================================================================================================== 00:21:35.895 [2024-12-09T04:16:18.365Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:35.895 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 519173 00:21:36.155 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:36.155 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:36.155 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:36.155 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:36.155 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:36.155 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:36.155 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:36.155 rmmod nvme_tcp 00:21:36.155 rmmod nvme_fabrics 00:21:36.155 rmmod nvme_keyring 00:21:36.155 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:36.155 05:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:36.155 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:36.155 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 518906 ']' 00:21:36.155 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 518906 00:21:36.155 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 518906 ']' 00:21:36.155 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 518906 00:21:36.155 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:36.155 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:36.155 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 518906 00:21:36.155 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:36.155 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:36.155 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 518906' 00:21:36.155 killing process with pid 518906 00:21:36.155 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 518906 00:21:36.155 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 518906 00:21:36.415 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:36.415 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:36.415 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:36.415 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:21:36.415 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:36.415 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:36.415 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:36.415 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:36.415 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:36.415 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.415 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.415 05:16:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.954 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:38.954 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.ngF 00:21:38.954 00:21:38.954 real 0m23.249s 00:21:38.954 user 0m23.481s 00:21:38.954 sys 0m11.311s 00:21:38.954 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.954 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:38.954 ************************************ 00:21:38.954 END TEST nvmf_fips 00:21:38.954 ************************************ 00:21:38.954 05:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:38.954 05:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:38.954 05:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:21:38.954 05:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:38.954 ************************************ 00:21:38.954 START TEST nvmf_control_msg_list 00:21:38.954 ************************************ 00:21:38.954 05:16:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:38.954 * Looking for test storage... 00:21:38.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:38.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.954 --rc genhtml_branch_coverage=1 00:21:38.954 --rc genhtml_function_coverage=1 00:21:38.954 --rc genhtml_legend=1 00:21:38.954 --rc geninfo_all_blocks=1 00:21:38.954 --rc geninfo_unexecuted_blocks=1 00:21:38.954 00:21:38.954 ' 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:38.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.954 --rc genhtml_branch_coverage=1 00:21:38.954 --rc genhtml_function_coverage=1 00:21:38.954 --rc genhtml_legend=1 00:21:38.954 --rc geninfo_all_blocks=1 00:21:38.954 --rc geninfo_unexecuted_blocks=1 00:21:38.954 00:21:38.954 ' 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:38.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.954 --rc genhtml_branch_coverage=1 00:21:38.954 --rc genhtml_function_coverage=1 00:21:38.954 --rc genhtml_legend=1 00:21:38.954 --rc geninfo_all_blocks=1 00:21:38.954 --rc geninfo_unexecuted_blocks=1 00:21:38.954 00:21:38.954 ' 00:21:38.954 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:38.954 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.954 --rc genhtml_branch_coverage=1 00:21:38.954 --rc genhtml_function_coverage=1 00:21:38.954 --rc genhtml_legend=1 00:21:38.954 --rc geninfo_all_blocks=1 00:21:38.954 --rc geninfo_unexecuted_blocks=1 00:21:38.954 00:21:38.954 ' 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:21:38.955 05:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.955 05:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:38.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:38.955 05:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:38.955 05:16:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:47.083 05:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:47.083 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:47.083 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:47.083 05:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:47.083 Found net devices under 0000:af:00.0: cvl_0_0 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:47.083 05:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:47.083 Found net devices under 0000:af:00.1: cvl_0_1 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:47.083 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.084 05:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:47.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.452 ms 00:21:47.084 00:21:47.084 --- 10.0.0.2 ping statistics --- 00:21:47.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.084 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:47.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:21:47.084 00:21:47.084 --- 10.0.0.1 ping statistics --- 00:21:47.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.084 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=524997 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 524997 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 524997 ']' 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.084 05:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:47.084 [2024-12-09 05:16:28.536781] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:21:47.084 [2024-12-09 05:16:28.536836] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.084 [2024-12-09 05:16:28.633389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.084 [2024-12-09 05:16:28.672830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.084 [2024-12-09 05:16:28.672868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.084 [2024-12-09 05:16:28.672877] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.084 [2024-12-09 05:16:28.672886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.084 [2024-12-09 05:16:28.672892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:47.084 [2024-12-09 05:16:28.673485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:47.084 [2024-12-09 05:16:29.420513] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:47.084 Malloc0 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:47.084 [2024-12-09 05:16:29.461129] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=525084 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=525086 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=525087 00:21:47.084 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 525084 00:21:47.085 05:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:47.085 [2024-12-09 05:16:29.539616] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:47.085 [2024-12-09 05:16:29.549510] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:47.344 [2024-12-09 05:16:29.559537] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:48.299 Initializing NVMe Controllers 00:21:48.299 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:48.299 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:48.299 Initialization complete. Launching workers. 00:21:48.299 ======================================================== 00:21:48.299 Latency(us) 00:21:48.299 Device Information : IOPS MiB/s Average min max 00:21:48.299 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40892.04 40645.88 40971.65 00:21:48.299 ======================================================== 00:21:48.299 Total : 25.00 0.10 40892.04 40645.88 40971.65 00:21:48.299 00:21:48.299 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 525086 00:21:48.299 Initializing NVMe Controllers 00:21:48.299 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:48.299 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:48.299 Initialization complete. Launching workers. 
00:21:48.299 ======================================================== 00:21:48.299 Latency(us) 00:21:48.299 Device Information : IOPS MiB/s Average min max 00:21:48.299 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40928.16 40725.37 41920.59 00:21:48.299 ======================================================== 00:21:48.299 Total : 25.00 0.10 40928.16 40725.37 41920.59 00:21:48.299 00:21:48.299 Initializing NVMe Controllers 00:21:48.299 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:48.299 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:48.299 Initialization complete. Launching workers. 00:21:48.299 ======================================================== 00:21:48.299 Latency(us) 00:21:48.299 Device Information : IOPS MiB/s Average min max 00:21:48.299 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6567.99 25.66 151.92 128.42 344.89 00:21:48.299 ======================================================== 00:21:48.299 Total : 6567.99 25.66 151.92 128.42 344.89 00:21:48.299 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 525087 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:48.558 05:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:48.558 rmmod nvme_tcp 00:21:48.558 rmmod nvme_fabrics 00:21:48.558 rmmod nvme_keyring 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 524997 ']' 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 524997 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 524997 ']' 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 524997 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 524997 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 524997' 00:21:48.558 killing process with pid 524997 00:21:48.558 05:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 524997 00:21:48.558 05:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 524997 00:21:48.817 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:48.817 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:48.817 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:48.817 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:48.817 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:48.817 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:48.817 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:48.817 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:48.817 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:48.817 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.817 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.817 05:16:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.353 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:51.353 00:21:51.353 real 0m12.275s 00:21:51.353 user 0m7.886s 00:21:51.353 sys 0m6.864s 00:21:51.353 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:21:51.353 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:51.353 ************************************ 00:21:51.353 END TEST nvmf_control_msg_list 00:21:51.353 ************************************ 00:21:51.353 05:16:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:51.353 05:16:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:51.353 05:16:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:51.353 05:16:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:51.354 ************************************ 00:21:51.354 START TEST nvmf_wait_for_buf 00:21:51.354 ************************************ 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:51.354 * Looking for test storage... 
00:21:51.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:21:51.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.354 --rc genhtml_branch_coverage=1 00:21:51.354 --rc genhtml_function_coverage=1 00:21:51.354 --rc genhtml_legend=1 00:21:51.354 --rc geninfo_all_blocks=1 00:21:51.354 --rc geninfo_unexecuted_blocks=1 00:21:51.354 00:21:51.354 ' 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:51.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.354 --rc genhtml_branch_coverage=1 00:21:51.354 --rc genhtml_function_coverage=1 00:21:51.354 --rc genhtml_legend=1 00:21:51.354 --rc geninfo_all_blocks=1 00:21:51.354 --rc geninfo_unexecuted_blocks=1 00:21:51.354 00:21:51.354 ' 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:51.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.354 --rc genhtml_branch_coverage=1 00:21:51.354 --rc genhtml_function_coverage=1 00:21:51.354 --rc genhtml_legend=1 00:21:51.354 --rc geninfo_all_blocks=1 00:21:51.354 --rc geninfo_unexecuted_blocks=1 00:21:51.354 00:21:51.354 ' 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:51.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.354 --rc genhtml_branch_coverage=1 00:21:51.354 --rc genhtml_function_coverage=1 00:21:51.354 --rc genhtml_legend=1 00:21:51.354 --rc geninfo_all_blocks=1 00:21:51.354 --rc geninfo_unexecuted_blocks=1 00:21:51.354 00:21:51.354 ' 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:51.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:51.354 05:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:59.478 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:59.478 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.478 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:59.478 Found net devices under 0000:af:00.0: cvl_0_0 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:59.479 05:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:59.479 Found net devices under 0000:af:00.1: cvl_0_1 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:59.479 05:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:59.479 05:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:59.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:21:59.479 00:21:59.479 --- 10.0.0.2 ping statistics --- 00:21:59.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.479 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:59.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:59.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:21:59.479 00:21:59.479 --- 10.0.0.1 ping statistics --- 00:21:59.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.479 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=529102 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 529102 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 529102 ']' 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.479 05:16:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:59.479 [2024-12-09 05:16:40.891807] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:21:59.479 [2024-12-09 05:16:40.891860] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.479 [2024-12-09 05:16:40.991172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.479 [2024-12-09 05:16:41.034503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.479 [2024-12-09 05:16:41.034538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:59.479 [2024-12-09 05:16:41.034547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.479 [2024-12-09 05:16:41.034556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.479 [2024-12-09 05:16:41.034564] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:59.479 [2024-12-09 05:16:41.035151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.479 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.479 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:59.479 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:59.479 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:59.479 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:59.479 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.479 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:59.479 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:59.479 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:59.479 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.479 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:59.479 
05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.479 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:59.479 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.479 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:59.479 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.479 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:59.479 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.479 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:59.479 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.480 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:59.480 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.480 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:59.480 Malloc0 00:21:59.480 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.480 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:59.480 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.480 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:21:59.480 [2024-12-09 05:16:41.867394] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.480 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.480 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:59.480 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.480 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:59.480 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.480 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:59.480 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.480 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:59.480 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.480 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:59.480 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.480 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:59.480 [2024-12-09 05:16:41.895604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.480 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:59.480 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:59.739 [2024-12-09 05:16:41.990291] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:22:01.117 Initializing NVMe Controllers
00:22:01.117 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:22:01.117 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:22:01.117 Initialization complete. Launching workers.
00:22:01.117 ========================================================
00:22:01.117 Latency(us)
00:22:01.117 Device Information : IOPS MiB/s Average min max
00:22:01.117 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 125.00 15.62 33366.85 7263.15 71849.13
00:22:01.117 ========================================================
00:22:01.117 Total : 125.00 15.62 33366.85 7263.15 71849.13
00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:01.377 05:16:43
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1974 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1974 -eq 0 ]] 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:01.377 rmmod nvme_tcp 00:22:01.377 rmmod nvme_fabrics 00:22:01.377 rmmod nvme_keyring 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 529102 ']' 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 529102 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 529102 ']' 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 529102 
00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 529102 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 529102' 00:22:01.377 killing process with pid 529102 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 529102 00:22:01.377 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 529102 00:22:01.636 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:01.636 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:01.636 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:01.636 05:16:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:22:01.636 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:22:01.636 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:01.636 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:22:01.636 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:01.636 05:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:01.636 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.636 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.636 05:16:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.173 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:04.173 00:22:04.173 real 0m12.767s 00:22:04.173 user 0m5.263s 00:22:04.173 sys 0m6.222s 00:22:04.173 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.173 05:16:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:04.173 ************************************ 00:22:04.173 END TEST nvmf_wait_for_buf 00:22:04.173 ************************************ 00:22:04.173 05:16:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:04.173 05:16:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:04.173 05:16:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:04.173 05:16:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:04.173 05:16:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:22:04.173 05:16:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:10.749 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.749 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:22:10.749 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:10.749 
05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:10.749 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:10.749 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:10.749 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:10.749 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:10.749 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:10.749 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:10.749 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:10.749 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:10.749 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:10.750 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.750 05:16:53 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:10.750 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:10.750 Found net devices under 0000:af:00.0: cvl_0_0 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:10.750 Found net devices under 0000:af:00.1: cvl_0_1 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:10.750 ************************************ 00:22:10.750 START TEST nvmf_perf_adq 00:22:10.750 ************************************ 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:10.750 * Looking for test storage... 00:22:10.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:22:10.750 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:11.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.009 --rc genhtml_branch_coverage=1 00:22:11.009 --rc genhtml_function_coverage=1 00:22:11.009 --rc genhtml_legend=1 00:22:11.009 --rc geninfo_all_blocks=1 00:22:11.009 --rc geninfo_unexecuted_blocks=1 00:22:11.009 00:22:11.009 ' 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:11.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.009 --rc genhtml_branch_coverage=1 00:22:11.009 --rc genhtml_function_coverage=1 00:22:11.009 --rc genhtml_legend=1 00:22:11.009 --rc geninfo_all_blocks=1 00:22:11.009 --rc geninfo_unexecuted_blocks=1 00:22:11.009 00:22:11.009 ' 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:11.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.009 --rc genhtml_branch_coverage=1 00:22:11.009 --rc genhtml_function_coverage=1 00:22:11.009 --rc genhtml_legend=1 00:22:11.009 --rc geninfo_all_blocks=1 00:22:11.009 --rc geninfo_unexecuted_blocks=1 00:22:11.009 00:22:11.009 ' 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:11.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.009 --rc genhtml_branch_coverage=1 00:22:11.009 --rc genhtml_function_coverage=1 00:22:11.009 --rc genhtml_legend=1 00:22:11.009 --rc geninfo_all_blocks=1 00:22:11.009 --rc geninfo_unexecuted_blocks=1 00:22:11.009 00:22:11.009 ' 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:22:11.009 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:11.010 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.010 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.010 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.010 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.010 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.010 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.010 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:11.010 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.010 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:22:11.010 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:11.010 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:11.010 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.010 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.010 05:16:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.010 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:11.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:11.010 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:11.010 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:11.010 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:11.010 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:11.010 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:11.010 05:16:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:19.185 05:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:19.185 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:19.185 
Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:19.185 Found net devices under 0000:af:00.0: cvl_0_0 00:22:19.185 05:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:19.185 Found net devices under 0000:af:00.1: cvl_0_1 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
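The discovery pass above (`gather_supported_nvmf_pci_devs`) matches known E810/X722/mlx device IDs from a PCI bus cache, then globs each function's `net/` directory in sysfs to find the interface names ("Found net devices under 0000:af:00.0: cvl_0_0"). A minimal standalone sketch of that sysfs walk follows; `SYSFS_ROOT` and `list_pci_net_devs` are hypothetical names introduced here for testability, not part of `nvmf/common.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the netdev lookup the log performs per PCI function:
# glob /sys/bus/pci/devices/<BDF>/net/* and print the interface names.
# SYSFS_ROOT is a test knob (an assumption, not in the original script).
list_pci_net_devs() {
    local sysfs_root=${SYSFS_ROOT:-/sys/bus/pci/devices}
    local pci=$1 dev
    for dev in "$sysfs_root/$pci/net/"*; do
        [ -e "$dev" ] || continue   # unmatched glob stays literal; skip it
        echo "${dev##*/}"           # strip the path, keep e.g. cvl_0_0
    done
}
```

On the test node this resolves `0000:af:00.0` to `cvl_0_0` and `0000:af:00.1` to `cvl_0_1`, which then populate `net_devs` and `TCP_INTERFACE_LIST`.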
00:22:19.185 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:19.185 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:22.474 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:27.752 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:27.753 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:27.753 05:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:27.753 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:27.753 Found net devices under 0000:af:00.0: cvl_0_0 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:27.753 Found net devices under 0000:af:00.1: cvl_0_1 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:27.753 05:17:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:27.753 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:27.753 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:27.753 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:27.753 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:27.753 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:27.753 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:27.753 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:28.012 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:28.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
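`nvmf_tcp_init` above builds the two-port testbed: the target-side interface (`cvl_0_0`, 10.0.0.2) is moved into its own network namespace `cvl_0_0_ns_spdk` while the initiator side (`cvl_0_1`, 10.0.0.1) stays in the root namespace, so traffic really crosses the wire between the two E810 ports, and an iptables rule admits the NVMe/TCP port. A dry-runnable sketch of that sequence, with a hypothetical `run` helper (echoes instead of executing when `DRY_RUN=1`, since the real commands need root and the physical NICs):

```shell
#!/usr/bin/env bash
# Sketch of the netns topology from the log. "run" and "setup_tcp_testbed"
# are names introduced here; the original inlines these commands in common.sh.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_tcp_testbed() {
    local tgt_if=$1 ini_if=$2 ns=${3:-cvl_0_0_ns_spdk}
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"           # target port into the ns
    run ip addr add 10.0.0.1/24 dev "$ini_if"       # initiator side
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    # admit NVMe/TCP (port 4420) on the initiator-facing interface
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
}
```

The pings that follow in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) verify this topology before the target starts.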
00:22:28.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:22:28.013 00:22:28.013 --- 10.0.0.2 ping statistics --- 00:22:28.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.013 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:28.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:22:28.013 00:22:28.013 --- 10.0.0.1 ping statistics --- 00:22:28.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.013 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=538422 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 538422 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 538422 ']' 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:28.013 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.013 [2024-12-09 05:17:10.332069] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:22:28.013 [2024-12-09 05:17:10.332116] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.013 [2024-12-09 05:17:10.430722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:28.013 [2024-12-09 05:17:10.472834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.013 [2024-12-09 05:17:10.472872] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.013 [2024-12-09 05:17:10.472882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.013 [2024-12-09 05:17:10.472890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.013 [2024-12-09 05:17:10.472897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:28.013 [2024-12-09 05:17:10.474688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.013 [2024-12-09 05:17:10.474801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.013 [2024-12-09 05:17:10.474887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:28.013 [2024-12-09 05:17:10.474886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:28.952 05:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.952 [2024-12-09 05:17:11.356092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.952 Malloc1 00:22:28.952 05:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.952 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:29.212 [2024-12-09 05:17:11.422937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.212 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.212 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=538576 00:22:29.212 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:29.212 05:17:11 
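The `rpc_cmd` calls above (`adq_configure_nvmf_target`) configure the started target over its RPC socket: posix sock options for ADQ placement, transport creation, a Malloc bdev, and a subsystem with a TCP listener on 10.0.0.2:4420. Restated as a function of plain RPC invocations; `RPC` defaulting to `echo` is a dry-run device added here, and on a live system it would point at `scripts/rpc.py` (path assumed) against the running `nvmf_tgt`:

```shell
#!/usr/bin/env bash
# Sketch of the perf_adq.sh@42-49 RPC sequence. RPC=echo makes it print what
# it would send; the wrapper name and dry-run knob are assumptions.
adq_configure_nvmf_target() {
    local rpc=${RPC:-echo}
    $rpc sock_impl_set_options --enable-placement-id 0 \
        --enable-zerocopy-send-server -i posix
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
}
```

Note the ordering constraint visible in the log: the target was launched with `--wait-for-rpc`, so the sock options must land before `framework_start_init` releases initialization.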
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:31.115 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:31.115 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.115 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.115 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.115 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:31.115 "tick_rate": 2500000000, 00:22:31.115 "poll_groups": [ 00:22:31.115 { 00:22:31.115 "name": "nvmf_tgt_poll_group_000", 00:22:31.115 "admin_qpairs": 1, 00:22:31.115 "io_qpairs": 1, 00:22:31.115 "current_admin_qpairs": 1, 00:22:31.115 "current_io_qpairs": 1, 00:22:31.115 "pending_bdev_io": 0, 00:22:31.115 "completed_nvme_io": 20612, 00:22:31.115 "transports": [ 00:22:31.115 { 00:22:31.115 "trtype": "TCP" 00:22:31.115 } 00:22:31.115 ] 00:22:31.115 }, 00:22:31.115 { 00:22:31.115 "name": "nvmf_tgt_poll_group_001", 00:22:31.115 "admin_qpairs": 0, 00:22:31.115 "io_qpairs": 1, 00:22:31.115 "current_admin_qpairs": 0, 00:22:31.116 "current_io_qpairs": 1, 00:22:31.116 "pending_bdev_io": 0, 00:22:31.116 "completed_nvme_io": 19915, 00:22:31.116 "transports": [ 00:22:31.116 { 00:22:31.116 "trtype": "TCP" 00:22:31.116 } 00:22:31.116 ] 00:22:31.116 }, 00:22:31.116 { 00:22:31.116 "name": "nvmf_tgt_poll_group_002", 00:22:31.116 "admin_qpairs": 0, 00:22:31.116 "io_qpairs": 1, 00:22:31.116 "current_admin_qpairs": 0, 00:22:31.116 "current_io_qpairs": 1, 00:22:31.116 "pending_bdev_io": 0, 00:22:31.116 "completed_nvme_io": 20688, 00:22:31.116 
"transports": [ 00:22:31.116 { 00:22:31.116 "trtype": "TCP" 00:22:31.116 } 00:22:31.116 ] 00:22:31.116 }, 00:22:31.116 { 00:22:31.116 "name": "nvmf_tgt_poll_group_003", 00:22:31.116 "admin_qpairs": 0, 00:22:31.116 "io_qpairs": 1, 00:22:31.116 "current_admin_qpairs": 0, 00:22:31.116 "current_io_qpairs": 1, 00:22:31.116 "pending_bdev_io": 0, 00:22:31.116 "completed_nvme_io": 20308, 00:22:31.116 "transports": [ 00:22:31.116 { 00:22:31.116 "trtype": "TCP" 00:22:31.116 } 00:22:31.116 ] 00:22:31.116 } 00:22:31.116 ] 00:22:31.116 }' 00:22:31.116 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:31.116 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:31.116 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:31.116 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:31.116 05:17:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 538576 00:22:39.230 Initializing NVMe Controllers 00:22:39.230 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:39.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:39.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:39.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:39.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:39.230 Initialization complete. Launching workers. 
00:22:39.230 ======================================================== 00:22:39.230 Latency(us) 00:22:39.230 Device Information : IOPS MiB/s Average min max 00:22:39.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10795.00 42.17 5929.74 2716.27 9074.68 00:22:39.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10727.80 41.91 5967.14 2125.76 10642.03 00:22:39.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11042.50 43.13 5796.30 2034.48 9905.42 00:22:39.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10999.90 42.97 5819.03 1777.26 10231.57 00:22:39.230 ======================================================== 00:22:39.230 Total : 43565.20 170.18 5877.17 1777.26 10642.03 00:22:39.230 00:22:39.230 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:22:39.230 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:39.230 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:39.230 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:39.230 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:39.230 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:39.230 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:39.231 rmmod nvme_tcp 00:22:39.231 rmmod nvme_fabrics 00:22:39.231 rmmod nvme_keyring 00:22:39.489 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:39.489 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:39.489 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:39.489 05:17:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 538422 ']' 00:22:39.489 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 538422 00:22:39.489 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 538422 ']' 00:22:39.489 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 538422 00:22:39.489 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:39.489 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:39.489 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 538422 00:22:39.489 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:39.489 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:39.489 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 538422' 00:22:39.489 killing process with pid 538422 00:22:39.490 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 538422 00:22:39.490 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 538422 00:22:39.749 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:39.749 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:39.749 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:39.749 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:39.749 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:39.749 05:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:39.749 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:39.749 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:39.749 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:39.749 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.749 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.749 05:17:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.651 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:41.651 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:41.651 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:41.651 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:43.026 05:17:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:45.559 05:17:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:50.836 05:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:50.836 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:50.836 
Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.836 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:50.836 Found net devices under 0000:af:00.0: cvl_0_0 00:22:50.837 05:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:50.837 Found net devices under 0000:af:00.1: cvl_0_1 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:50.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:22:50.837 00:22:50.837 --- 10.0.0.2 ping statistics --- 00:22:50.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.837 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:50.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:22:50.837 00:22:50.837 --- 10.0.0.1 ping statistics --- 00:22:50.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.837 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:50.837 net.core.busy_poll = 1 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:50.837 net.core.busy_read = 1 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:50.837 05:17:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:50.837 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:50.837 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:50.837 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:50.837 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:50.837 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:50.837 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:50.837 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.837 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=542549 00:22:50.837 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 542549 00:22:50.837 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:22:50.837 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 542549 ']' 00:22:50.837 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.837 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.837 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.837 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.837 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.837 [2024-12-09 05:17:33.208970] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:22:50.837 [2024-12-09 05:17:33.209018] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.095 [2024-12-09 05:17:33.306606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.095 [2024-12-09 05:17:33.349004] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.095 [2024-12-09 05:17:33.349040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.095 [2024-12-09 05:17:33.349049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.095 [2024-12-09 05:17:33.349058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:51.095 [2024-12-09 05:17:33.349065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.095 [2024-12-09 05:17:33.350674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.095 [2024-12-09 05:17:33.350709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.095 [2024-12-09 05:17:33.350818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.095 [2024-12-09 05:17:33.350819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.673 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.673 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:51.673 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:51.673 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:51.673 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:51.673 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.673 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:51.673 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:51.673 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:51.673 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.673 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:51.673 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:51.932 [2024-12-09 05:17:34.239702] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.932 05:17:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:51.932 Malloc1 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:51.932 [2024-12-09 05:17:34.305727] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=542833 
00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:51.932 05:17:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:54.465 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:54.466 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.466 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.466 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.466 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:54.466 "tick_rate": 2500000000, 00:22:54.466 "poll_groups": [ 00:22:54.466 { 00:22:54.466 "name": "nvmf_tgt_poll_group_000", 00:22:54.466 "admin_qpairs": 1, 00:22:54.466 "io_qpairs": 2, 00:22:54.466 "current_admin_qpairs": 1, 00:22:54.466 "current_io_qpairs": 2, 00:22:54.466 "pending_bdev_io": 0, 00:22:54.466 "completed_nvme_io": 29260, 00:22:54.466 "transports": [ 00:22:54.466 { 00:22:54.466 "trtype": "TCP" 00:22:54.466 } 00:22:54.466 ] 00:22:54.466 }, 00:22:54.466 { 00:22:54.466 "name": "nvmf_tgt_poll_group_001", 00:22:54.466 "admin_qpairs": 0, 00:22:54.466 "io_qpairs": 2, 00:22:54.466 "current_admin_qpairs": 0, 00:22:54.466 "current_io_qpairs": 2, 00:22:54.466 "pending_bdev_io": 0, 00:22:54.466 "completed_nvme_io": 28503, 00:22:54.466 "transports": [ 00:22:54.466 { 00:22:54.466 "trtype": "TCP" 00:22:54.466 } 00:22:54.466 ] 00:22:54.466 }, 00:22:54.466 { 00:22:54.466 "name": "nvmf_tgt_poll_group_002", 00:22:54.466 "admin_qpairs": 0, 00:22:54.466 "io_qpairs": 0, 00:22:54.466 "current_admin_qpairs": 0, 
00:22:54.466 "current_io_qpairs": 0, 00:22:54.466 "pending_bdev_io": 0, 00:22:54.466 "completed_nvme_io": 0, 00:22:54.466 "transports": [ 00:22:54.466 { 00:22:54.466 "trtype": "TCP" 00:22:54.466 } 00:22:54.466 ] 00:22:54.466 }, 00:22:54.466 { 00:22:54.466 "name": "nvmf_tgt_poll_group_003", 00:22:54.466 "admin_qpairs": 0, 00:22:54.466 "io_qpairs": 0, 00:22:54.466 "current_admin_qpairs": 0, 00:22:54.466 "current_io_qpairs": 0, 00:22:54.466 "pending_bdev_io": 0, 00:22:54.466 "completed_nvme_io": 0, 00:22:54.466 "transports": [ 00:22:54.466 { 00:22:54.466 "trtype": "TCP" 00:22:54.466 } 00:22:54.466 ] 00:22:54.466 } 00:22:54.466 ] 00:22:54.466 }' 00:22:54.466 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:54.466 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:54.466 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:54.466 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:54.466 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 542833 00:23:02.581 Initializing NVMe Controllers 00:23:02.581 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:02.581 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:02.581 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:02.581 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:02.581 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:02.582 Initialization complete. Launching workers. 
00:23:02.582 ======================================================== 00:23:02.582 Latency(us) 00:23:02.582 Device Information : IOPS MiB/s Average min max 00:23:02.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8535.70 33.34 7518.45 1424.56 52919.59 00:23:02.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7643.50 29.86 8373.31 1379.47 53412.97 00:23:02.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7488.20 29.25 8545.50 1659.47 53398.87 00:23:02.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6903.10 26.97 9269.98 1325.27 53355.84 00:23:02.582 ======================================================== 00:23:02.582 Total : 30570.49 119.42 8379.28 1325.27 53412.97 00:23:02.582 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:02.582 rmmod nvme_tcp 00:23:02.582 rmmod nvme_fabrics 00:23:02.582 rmmod nvme_keyring 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:02.582 05:17:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 542549 ']' 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 542549 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 542549 ']' 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 542549 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 542549 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 542549' 00:23:02.582 killing process with pid 542549 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 542549 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 542549 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:02.582 05:17:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.582 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.875 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:05.875 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:05.875 00:23:05.875 real 0m54.919s 00:23:05.875 user 2m48.602s 00:23:05.875 sys 0m14.950s 00:23:05.875 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:05.875 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:05.875 ************************************ 00:23:05.875 END TEST nvmf_perf_adq 00:23:05.875 ************************************ 00:23:05.875 05:17:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:05.875 05:17:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:05.875 05:17:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:05.875 05:17:48 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:23:05.875 ************************************ 00:23:05.875 START TEST nvmf_shutdown 00:23:05.875 ************************************ 00:23:05.875 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:05.875 * Looking for test storage... 00:23:05.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:05.875 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:05.875 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:23:05.875 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:05.875 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:05.875 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:05.875 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:05.875 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:05.876 05:17:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:05.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.876 --rc genhtml_branch_coverage=1 00:23:05.876 --rc genhtml_function_coverage=1 00:23:05.876 --rc genhtml_legend=1 00:23:05.876 --rc geninfo_all_blocks=1 00:23:05.876 --rc geninfo_unexecuted_blocks=1 00:23:05.876 00:23:05.876 ' 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:05.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.876 --rc genhtml_branch_coverage=1 00:23:05.876 --rc genhtml_function_coverage=1 00:23:05.876 --rc genhtml_legend=1 00:23:05.876 --rc geninfo_all_blocks=1 00:23:05.876 --rc geninfo_unexecuted_blocks=1 00:23:05.876 00:23:05.876 ' 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:05.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.876 --rc genhtml_branch_coverage=1 00:23:05.876 --rc genhtml_function_coverage=1 00:23:05.876 --rc genhtml_legend=1 00:23:05.876 --rc geninfo_all_blocks=1 00:23:05.876 --rc geninfo_unexecuted_blocks=1 00:23:05.876 00:23:05.876 ' 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:05.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.876 --rc genhtml_branch_coverage=1 00:23:05.876 --rc genhtml_function_coverage=1 00:23:05.876 --rc genhtml_legend=1 00:23:05.876 --rc geninfo_all_blocks=1 00:23:05.876 --rc geninfo_unexecuted_blocks=1 00:23:05.876 00:23:05.876 ' 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:05.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:05.876 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:06.137 ************************************ 00:23:06.137 START TEST nvmf_shutdown_tc1 00:23:06.137 ************************************ 00:23:06.137 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:23:06.137 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:06.137 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:06.137 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:06.137 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.137 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:06.137 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:06.137 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:06.137 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.137 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:23:06.137 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.137 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:06.137 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:06.137 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:06.137 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:23:14.270 05:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.270 05:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:14.270 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.270 05:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:14.270 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:14.270 Found net devices under 0000:af:00.0: cvl_0_0 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.270 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:14.271 Found net devices under 0000:af:00.1: cvl_0_1 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:14.271 05:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:14.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:23:14.271 00:23:14.271 --- 10.0.0.2 ping statistics --- 00:23:14.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.271 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:14.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:14.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:23:14.271 00:23:14.271 --- 10.0.0.1 ping statistics --- 00:23:14.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.271 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=548513 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 548513 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 548513 ']' 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:14.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.271 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:14.271 [2024-12-09 05:17:55.840928] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:23:14.271 [2024-12-09 05:17:55.840980] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.271 [2024-12-09 05:17:55.938508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:14.271 [2024-12-09 05:17:55.980477] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.271 [2024-12-09 05:17:55.980515] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.271 [2024-12-09 05:17:55.980525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.271 [2024-12-09 05:17:55.980534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.271 [2024-12-09 05:17:55.980541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:14.271 [2024-12-09 05:17:55.982184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.271 [2024-12-09 05:17:55.982292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:14.271 [2024-12-09 05:17:55.982323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.271 [2024-12-09 05:17:55.982325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:14.271 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.271 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:14.271 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:14.271 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:14.271 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:14.271 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.271 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:14.271 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.271 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:14.271 [2024-12-09 05:17:56.729683] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.531 05:17:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.531 05:17:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:14.531 Malloc1 00:23:14.531 [2024-12-09 05:17:56.860494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.531 Malloc2 00:23:14.531 Malloc3 00:23:14.531 Malloc4 00:23:14.790 Malloc5 00:23:14.790 Malloc6 00:23:14.790 Malloc7 00:23:14.790 Malloc8 00:23:14.790 Malloc9 
00:23:14.791 Malloc10 00:23:14.791 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.791 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:14.791 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:14.791 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:15.050 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=548825 00:23:15.050 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 548825 /var/tmp/bdevperf.sock 00:23:15.050 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 548825 ']' 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:15.051 { 00:23:15.051 "params": { 00:23:15.051 "name": "Nvme$subsystem", 00:23:15.051 "trtype": "$TEST_TRANSPORT", 00:23:15.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.051 "adrfam": "ipv4", 00:23:15.051 "trsvcid": "$NVMF_PORT", 00:23:15.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.051 "hdgst": ${hdgst:-false}, 00:23:15.051 "ddgst": ${ddgst:-false} 00:23:15.051 }, 00:23:15.051 "method": "bdev_nvme_attach_controller" 00:23:15.051 } 00:23:15.051 EOF 00:23:15.051 )") 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:15.051 { 00:23:15.051 "params": { 00:23:15.051 "name": "Nvme$subsystem", 00:23:15.051 "trtype": "$TEST_TRANSPORT", 00:23:15.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.051 "adrfam": "ipv4", 00:23:15.051 "trsvcid": "$NVMF_PORT", 00:23:15.051 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.051 "hdgst": ${hdgst:-false}, 00:23:15.051 "ddgst": ${ddgst:-false} 00:23:15.051 }, 00:23:15.051 "method": "bdev_nvme_attach_controller" 00:23:15.051 } 00:23:15.051 EOF 00:23:15.051 )") 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:15.051 { 00:23:15.051 "params": { 00:23:15.051 "name": "Nvme$subsystem", 00:23:15.051 "trtype": "$TEST_TRANSPORT", 00:23:15.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.051 "adrfam": "ipv4", 00:23:15.051 "trsvcid": "$NVMF_PORT", 00:23:15.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.051 "hdgst": ${hdgst:-false}, 00:23:15.051 "ddgst": ${ddgst:-false} 00:23:15.051 }, 00:23:15.051 "method": "bdev_nvme_attach_controller" 00:23:15.051 } 00:23:15.051 EOF 00:23:15.051 )") 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:15.051 { 00:23:15.051 "params": { 00:23:15.051 "name": "Nvme$subsystem", 00:23:15.051 "trtype": "$TEST_TRANSPORT", 00:23:15.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.051 "adrfam": "ipv4", 00:23:15.051 "trsvcid": "$NVMF_PORT", 00:23:15.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.051 "hdgst": 
${hdgst:-false}, 00:23:15.051 "ddgst": ${ddgst:-false} 00:23:15.051 }, 00:23:15.051 "method": "bdev_nvme_attach_controller" 00:23:15.051 } 00:23:15.051 EOF 00:23:15.051 )") 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:15.051 { 00:23:15.051 "params": { 00:23:15.051 "name": "Nvme$subsystem", 00:23:15.051 "trtype": "$TEST_TRANSPORT", 00:23:15.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.051 "adrfam": "ipv4", 00:23:15.051 "trsvcid": "$NVMF_PORT", 00:23:15.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.051 "hdgst": ${hdgst:-false}, 00:23:15.051 "ddgst": ${ddgst:-false} 00:23:15.051 }, 00:23:15.051 "method": "bdev_nvme_attach_controller" 00:23:15.051 } 00:23:15.051 EOF 00:23:15.051 )") 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:15.051 { 00:23:15.051 "params": { 00:23:15.051 "name": "Nvme$subsystem", 00:23:15.051 "trtype": "$TEST_TRANSPORT", 00:23:15.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.051 "adrfam": "ipv4", 00:23:15.051 "trsvcid": "$NVMF_PORT", 00:23:15.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.051 "hdgst": ${hdgst:-false}, 00:23:15.051 "ddgst": ${ddgst:-false} 00:23:15.051 }, 00:23:15.051 "method": "bdev_nvme_attach_controller" 
00:23:15.051 } 00:23:15.051 EOF 00:23:15.051 )") 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:15.051 [2024-12-09 05:17:57.349266] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:23:15.051 [2024-12-09 05:17:57.349318] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:15.051 { 00:23:15.051 "params": { 00:23:15.051 "name": "Nvme$subsystem", 00:23:15.051 "trtype": "$TEST_TRANSPORT", 00:23:15.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.051 "adrfam": "ipv4", 00:23:15.051 "trsvcid": "$NVMF_PORT", 00:23:15.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.051 "hdgst": ${hdgst:-false}, 00:23:15.051 "ddgst": ${ddgst:-false} 00:23:15.051 }, 00:23:15.051 "method": "bdev_nvme_attach_controller" 00:23:15.051 } 00:23:15.051 EOF 00:23:15.051 )") 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:15.051 { 00:23:15.051 "params": { 00:23:15.051 "name": "Nvme$subsystem", 00:23:15.051 "trtype": "$TEST_TRANSPORT", 00:23:15.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.051 "adrfam": "ipv4", 00:23:15.051 "trsvcid": "$NVMF_PORT", 
00:23:15.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.051 "hdgst": ${hdgst:-false}, 00:23:15.051 "ddgst": ${ddgst:-false} 00:23:15.051 }, 00:23:15.051 "method": "bdev_nvme_attach_controller" 00:23:15.051 } 00:23:15.051 EOF 00:23:15.051 )") 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:15.051 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:15.051 { 00:23:15.051 "params": { 00:23:15.051 "name": "Nvme$subsystem", 00:23:15.051 "trtype": "$TEST_TRANSPORT", 00:23:15.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.052 "adrfam": "ipv4", 00:23:15.052 "trsvcid": "$NVMF_PORT", 00:23:15.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.052 "hdgst": ${hdgst:-false}, 00:23:15.052 "ddgst": ${ddgst:-false} 00:23:15.052 }, 00:23:15.052 "method": "bdev_nvme_attach_controller" 00:23:15.052 } 00:23:15.052 EOF 00:23:15.052 )") 00:23:15.052 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:15.052 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:15.052 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:15.052 { 00:23:15.052 "params": { 00:23:15.052 "name": "Nvme$subsystem", 00:23:15.052 "trtype": "$TEST_TRANSPORT", 00:23:15.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.052 "adrfam": "ipv4", 00:23:15.052 "trsvcid": "$NVMF_PORT", 00:23:15.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:23:15.052 "hdgst": ${hdgst:-false}, 00:23:15.052 "ddgst": ${ddgst:-false} 00:23:15.052 }, 00:23:15.052 "method": "bdev_nvme_attach_controller" 00:23:15.052 } 00:23:15.052 EOF 00:23:15.052 )") 00:23:15.052 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:15.052 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:15.052 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:15.052 05:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:15.052 "params": { 00:23:15.052 "name": "Nvme1", 00:23:15.052 "trtype": "tcp", 00:23:15.052 "traddr": "10.0.0.2", 00:23:15.052 "adrfam": "ipv4", 00:23:15.052 "trsvcid": "4420", 00:23:15.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:15.052 "hdgst": false, 00:23:15.052 "ddgst": false 00:23:15.052 }, 00:23:15.052 "method": "bdev_nvme_attach_controller" 00:23:15.052 },{ 00:23:15.052 "params": { 00:23:15.052 "name": "Nvme2", 00:23:15.052 "trtype": "tcp", 00:23:15.052 "traddr": "10.0.0.2", 00:23:15.052 "adrfam": "ipv4", 00:23:15.052 "trsvcid": "4420", 00:23:15.052 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:15.052 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:15.052 "hdgst": false, 00:23:15.052 "ddgst": false 00:23:15.052 }, 00:23:15.052 "method": "bdev_nvme_attach_controller" 00:23:15.052 },{ 00:23:15.052 "params": { 00:23:15.052 "name": "Nvme3", 00:23:15.052 "trtype": "tcp", 00:23:15.052 "traddr": "10.0.0.2", 00:23:15.052 "adrfam": "ipv4", 00:23:15.052 "trsvcid": "4420", 00:23:15.052 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:15.052 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:15.052 "hdgst": false, 00:23:15.052 "ddgst": false 00:23:15.052 }, 00:23:15.052 "method": "bdev_nvme_attach_controller" 00:23:15.052 },{ 00:23:15.052 "params": { 00:23:15.052 
"name": "Nvme4", 00:23:15.052 "trtype": "tcp", 00:23:15.052 "traddr": "10.0.0.2", 00:23:15.052 "adrfam": "ipv4", 00:23:15.052 "trsvcid": "4420", 00:23:15.052 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:15.052 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:15.052 "hdgst": false, 00:23:15.052 "ddgst": false 00:23:15.052 }, 00:23:15.052 "method": "bdev_nvme_attach_controller" 00:23:15.052 },{ 00:23:15.052 "params": { 00:23:15.052 "name": "Nvme5", 00:23:15.052 "trtype": "tcp", 00:23:15.052 "traddr": "10.0.0.2", 00:23:15.052 "adrfam": "ipv4", 00:23:15.052 "trsvcid": "4420", 00:23:15.052 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:15.052 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:15.052 "hdgst": false, 00:23:15.052 "ddgst": false 00:23:15.052 }, 00:23:15.052 "method": "bdev_nvme_attach_controller" 00:23:15.052 },{ 00:23:15.052 "params": { 00:23:15.052 "name": "Nvme6", 00:23:15.052 "trtype": "tcp", 00:23:15.052 "traddr": "10.0.0.2", 00:23:15.052 "adrfam": "ipv4", 00:23:15.052 "trsvcid": "4420", 00:23:15.052 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:15.052 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:15.052 "hdgst": false, 00:23:15.052 "ddgst": false 00:23:15.052 }, 00:23:15.052 "method": "bdev_nvme_attach_controller" 00:23:15.052 },{ 00:23:15.052 "params": { 00:23:15.052 "name": "Nvme7", 00:23:15.052 "trtype": "tcp", 00:23:15.052 "traddr": "10.0.0.2", 00:23:15.052 "adrfam": "ipv4", 00:23:15.052 "trsvcid": "4420", 00:23:15.052 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:15.052 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:15.052 "hdgst": false, 00:23:15.052 "ddgst": false 00:23:15.052 }, 00:23:15.052 "method": "bdev_nvme_attach_controller" 00:23:15.052 },{ 00:23:15.052 "params": { 00:23:15.052 "name": "Nvme8", 00:23:15.052 "trtype": "tcp", 00:23:15.052 "traddr": "10.0.0.2", 00:23:15.052 "adrfam": "ipv4", 00:23:15.052 "trsvcid": "4420", 00:23:15.052 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:15.052 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:15.052 
"hdgst": false, 00:23:15.052 "ddgst": false 00:23:15.052 }, 00:23:15.052 "method": "bdev_nvme_attach_controller" 00:23:15.052 },{ 00:23:15.052 "params": { 00:23:15.052 "name": "Nvme9", 00:23:15.052 "trtype": "tcp", 00:23:15.052 "traddr": "10.0.0.2", 00:23:15.052 "adrfam": "ipv4", 00:23:15.052 "trsvcid": "4420", 00:23:15.052 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:15.052 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:15.052 "hdgst": false, 00:23:15.052 "ddgst": false 00:23:15.052 }, 00:23:15.052 "method": "bdev_nvme_attach_controller" 00:23:15.052 },{ 00:23:15.052 "params": { 00:23:15.052 "name": "Nvme10", 00:23:15.052 "trtype": "tcp", 00:23:15.052 "traddr": "10.0.0.2", 00:23:15.052 "adrfam": "ipv4", 00:23:15.052 "trsvcid": "4420", 00:23:15.052 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:15.052 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:15.052 "hdgst": false, 00:23:15.052 "ddgst": false 00:23:15.052 }, 00:23:15.052 "method": "bdev_nvme_attach_controller" 00:23:15.052 }' 00:23:15.052 [2024-12-09 05:17:57.446079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.052 [2024-12-09 05:17:57.485144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.430 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:16.430 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:16.430 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:16.430 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.430 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:16.430 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.430 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 548825 00:23:16.430 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:16.430 05:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:17.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 548825 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 548513 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:17.808 { 00:23:17.808 "params": { 00:23:17.808 "name": "Nvme$subsystem", 00:23:17.808 "trtype": "$TEST_TRANSPORT", 00:23:17.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.808 "adrfam": "ipv4", 00:23:17.808 "trsvcid": "$NVMF_PORT", 00:23:17.808 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.808 "hdgst": ${hdgst:-false}, 00:23:17.808 "ddgst": ${ddgst:-false} 00:23:17.808 }, 00:23:17.808 "method": "bdev_nvme_attach_controller" 00:23:17.808 } 00:23:17.808 EOF 00:23:17.808 )") 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:17.808 { 00:23:17.808 "params": { 00:23:17.808 "name": "Nvme$subsystem", 00:23:17.808 "trtype": "$TEST_TRANSPORT", 00:23:17.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.808 "adrfam": "ipv4", 00:23:17.808 "trsvcid": "$NVMF_PORT", 00:23:17.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.808 "hdgst": ${hdgst:-false}, 00:23:17.808 "ddgst": ${ddgst:-false} 00:23:17.808 }, 00:23:17.808 "method": "bdev_nvme_attach_controller" 00:23:17.808 } 00:23:17.808 EOF 00:23:17.808 )") 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:17.808 { 00:23:17.808 "params": { 00:23:17.808 "name": "Nvme$subsystem", 00:23:17.808 "trtype": "$TEST_TRANSPORT", 00:23:17.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.808 "adrfam": "ipv4", 00:23:17.808 "trsvcid": "$NVMF_PORT", 00:23:17.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.808 "hdgst": 
${hdgst:-false}, 00:23:17.808 "ddgst": ${ddgst:-false} 00:23:17.808 }, 00:23:17.808 "method": "bdev_nvme_attach_controller" 00:23:17.808 } 00:23:17.808 EOF 00:23:17.808 )") 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:17.808 { 00:23:17.808 "params": { 00:23:17.808 "name": "Nvme$subsystem", 00:23:17.808 "trtype": "$TEST_TRANSPORT", 00:23:17.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.808 "adrfam": "ipv4", 00:23:17.808 "trsvcid": "$NVMF_PORT", 00:23:17.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.808 "hdgst": ${hdgst:-false}, 00:23:17.808 "ddgst": ${ddgst:-false} 00:23:17.808 }, 00:23:17.808 "method": "bdev_nvme_attach_controller" 00:23:17.808 } 00:23:17.808 EOF 00:23:17.808 )") 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:17.808 { 00:23:17.808 "params": { 00:23:17.808 "name": "Nvme$subsystem", 00:23:17.808 "trtype": "$TEST_TRANSPORT", 00:23:17.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.808 "adrfam": "ipv4", 00:23:17.808 "trsvcid": "$NVMF_PORT", 00:23:17.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.808 "hdgst": ${hdgst:-false}, 00:23:17.808 "ddgst": ${ddgst:-false} 00:23:17.808 }, 00:23:17.808 "method": "bdev_nvme_attach_controller" 
00:23:17.808 } 00:23:17.808 EOF 00:23:17.808 )") 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:17.808 { 00:23:17.808 "params": { 00:23:17.808 "name": "Nvme$subsystem", 00:23:17.808 "trtype": "$TEST_TRANSPORT", 00:23:17.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.808 "adrfam": "ipv4", 00:23:17.808 "trsvcid": "$NVMF_PORT", 00:23:17.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.808 "hdgst": ${hdgst:-false}, 00:23:17.808 "ddgst": ${ddgst:-false} 00:23:17.808 }, 00:23:17.808 "method": "bdev_nvme_attach_controller" 00:23:17.808 } 00:23:17.808 EOF 00:23:17.808 )") 00:23:17.808 [2024-12-09 05:17:59.898055] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:23:17.808 [2024-12-09 05:17:59.898107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid549378 ] 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:17.808 { 00:23:17.808 "params": { 00:23:17.808 "name": "Nvme$subsystem", 00:23:17.808 "trtype": "$TEST_TRANSPORT", 00:23:17.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.808 "adrfam": "ipv4", 00:23:17.808 "trsvcid": "$NVMF_PORT", 00:23:17.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.808 "hdgst": ${hdgst:-false}, 00:23:17.808 "ddgst": ${ddgst:-false} 00:23:17.808 }, 00:23:17.808 "method": "bdev_nvme_attach_controller" 00:23:17.808 } 00:23:17.808 EOF 00:23:17.808 )") 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:17.808 { 00:23:17.808 "params": { 00:23:17.808 "name": "Nvme$subsystem", 00:23:17.808 "trtype": "$TEST_TRANSPORT", 00:23:17.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.808 "adrfam": "ipv4", 00:23:17.808 "trsvcid": "$NVMF_PORT", 00:23:17.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.808 "hdgst": 
${hdgst:-false}, 00:23:17.808 "ddgst": ${ddgst:-false} 00:23:17.808 }, 00:23:17.808 "method": "bdev_nvme_attach_controller" 00:23:17.808 } 00:23:17.808 EOF 00:23:17.808 )") 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:17.808 { 00:23:17.808 "params": { 00:23:17.808 "name": "Nvme$subsystem", 00:23:17.808 "trtype": "$TEST_TRANSPORT", 00:23:17.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.808 "adrfam": "ipv4", 00:23:17.808 "trsvcid": "$NVMF_PORT", 00:23:17.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.808 "hdgst": ${hdgst:-false}, 00:23:17.808 "ddgst": ${ddgst:-false} 00:23:17.808 }, 00:23:17.808 "method": "bdev_nvme_attach_controller" 00:23:17.808 } 00:23:17.808 EOF 00:23:17.808 )") 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:17.808 { 00:23:17.808 "params": { 00:23:17.808 "name": "Nvme$subsystem", 00:23:17.808 "trtype": "$TEST_TRANSPORT", 00:23:17.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:17.808 "adrfam": "ipv4", 00:23:17.808 "trsvcid": "$NVMF_PORT", 00:23:17.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:17.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:17.808 "hdgst": ${hdgst:-false}, 00:23:17.808 "ddgst": ${ddgst:-false} 00:23:17.808 }, 00:23:17.808 "method": "bdev_nvme_attach_controller" 
00:23:17.808 } 00:23:17.808 EOF 00:23:17.808 )") 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:17.808 05:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:17.808 "params": { 00:23:17.808 "name": "Nvme1", 00:23:17.808 "trtype": "tcp", 00:23:17.808 "traddr": "10.0.0.2", 00:23:17.808 "adrfam": "ipv4", 00:23:17.808 "trsvcid": "4420", 00:23:17.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:17.808 "hdgst": false, 00:23:17.808 "ddgst": false 00:23:17.808 }, 00:23:17.808 "method": "bdev_nvme_attach_controller" 00:23:17.808 },{ 00:23:17.808 "params": { 00:23:17.808 "name": "Nvme2", 00:23:17.808 "trtype": "tcp", 00:23:17.808 "traddr": "10.0.0.2", 00:23:17.808 "adrfam": "ipv4", 00:23:17.808 "trsvcid": "4420", 00:23:17.808 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:17.808 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:17.808 "hdgst": false, 00:23:17.808 "ddgst": false 00:23:17.808 }, 00:23:17.808 "method": "bdev_nvme_attach_controller" 00:23:17.808 },{ 00:23:17.808 "params": { 00:23:17.808 "name": "Nvme3", 00:23:17.808 "trtype": "tcp", 00:23:17.808 "traddr": "10.0.0.2", 00:23:17.808 "adrfam": "ipv4", 00:23:17.808 "trsvcid": "4420", 00:23:17.808 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:17.808 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:17.808 "hdgst": false, 00:23:17.808 "ddgst": false 00:23:17.808 }, 00:23:17.808 "method": "bdev_nvme_attach_controller" 00:23:17.808 },{ 00:23:17.808 "params": { 00:23:17.808 "name": "Nvme4", 00:23:17.808 "trtype": "tcp", 00:23:17.808 "traddr": "10.0.0.2", 00:23:17.808 "adrfam": "ipv4", 00:23:17.808 "trsvcid": "4420", 
00:23:17.808 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:17.808 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:17.808 "hdgst": false, 00:23:17.808 "ddgst": false 00:23:17.808 }, 00:23:17.808 "method": "bdev_nvme_attach_controller" 00:23:17.808 },{ 00:23:17.808 "params": { 00:23:17.808 "name": "Nvme5", 00:23:17.808 "trtype": "tcp", 00:23:17.808 "traddr": "10.0.0.2", 00:23:17.808 "adrfam": "ipv4", 00:23:17.808 "trsvcid": "4420", 00:23:17.809 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:17.809 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:17.809 "hdgst": false, 00:23:17.809 "ddgst": false 00:23:17.809 }, 00:23:17.809 "method": "bdev_nvme_attach_controller" 00:23:17.809 },{ 00:23:17.809 "params": { 00:23:17.809 "name": "Nvme6", 00:23:17.809 "trtype": "tcp", 00:23:17.809 "traddr": "10.0.0.2", 00:23:17.809 "adrfam": "ipv4", 00:23:17.809 "trsvcid": "4420", 00:23:17.809 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:17.809 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:17.809 "hdgst": false, 00:23:17.809 "ddgst": false 00:23:17.809 }, 00:23:17.809 "method": "bdev_nvme_attach_controller" 00:23:17.809 },{ 00:23:17.809 "params": { 00:23:17.809 "name": "Nvme7", 00:23:17.809 "trtype": "tcp", 00:23:17.809 "traddr": "10.0.0.2", 00:23:17.809 "adrfam": "ipv4", 00:23:17.809 "trsvcid": "4420", 00:23:17.809 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:17.809 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:17.809 "hdgst": false, 00:23:17.809 "ddgst": false 00:23:17.809 }, 00:23:17.809 "method": "bdev_nvme_attach_controller" 00:23:17.809 },{ 00:23:17.809 "params": { 00:23:17.809 "name": "Nvme8", 00:23:17.809 "trtype": "tcp", 00:23:17.809 "traddr": "10.0.0.2", 00:23:17.809 "adrfam": "ipv4", 00:23:17.809 "trsvcid": "4420", 00:23:17.809 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:17.809 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:17.809 "hdgst": false, 00:23:17.809 "ddgst": false 00:23:17.809 }, 00:23:17.809 "method": "bdev_nvme_attach_controller" 00:23:17.809 },{ 00:23:17.809 "params": 
{ 00:23:17.809 "name": "Nvme9", 00:23:17.809 "trtype": "tcp", 00:23:17.809 "traddr": "10.0.0.2", 00:23:17.809 "adrfam": "ipv4", 00:23:17.809 "trsvcid": "4420", 00:23:17.809 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:17.809 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:17.809 "hdgst": false, 00:23:17.809 "ddgst": false 00:23:17.809 }, 00:23:17.809 "method": "bdev_nvme_attach_controller" 00:23:17.809 },{ 00:23:17.809 "params": { 00:23:17.809 "name": "Nvme10", 00:23:17.809 "trtype": "tcp", 00:23:17.809 "traddr": "10.0.0.2", 00:23:17.809 "adrfam": "ipv4", 00:23:17.809 "trsvcid": "4420", 00:23:17.809 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:17.809 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:17.809 "hdgst": false, 00:23:17.809 "ddgst": false 00:23:17.809 }, 00:23:17.809 "method": "bdev_nvme_attach_controller" 00:23:17.809 }' 00:23:17.809 [2024-12-09 05:17:59.995225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.809 [2024-12-09 05:18:00.039076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.185 Running I/O for 1 seconds... 
00:23:20.562 2382.00 IOPS, 148.88 MiB/s 00:23:20.562 Latency(us) 00:23:20.562 [2024-12-09T04:18:03.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.562 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:20.562 Verification LBA range: start 0x0 length 0x400 00:23:20.562 Nvme1n1 : 1.10 291.64 18.23 0.00 0.00 217636.86 21810.38 203004.31 00:23:20.562 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:20.562 Verification LBA range: start 0x0 length 0x400 00:23:20.562 Nvme2n1 : 1.04 246.86 15.43 0.00 0.00 253306.68 17406.36 219781.53 00:23:20.562 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:20.562 Verification LBA range: start 0x0 length 0x400 00:23:20.562 Nvme3n1 : 1.09 292.57 18.29 0.00 0.00 210885.51 14680.06 209715.20 00:23:20.562 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:20.562 Verification LBA range: start 0x0 length 0x400 00:23:20.562 Nvme4n1 : 1.09 299.40 18.71 0.00 0.00 202430.42 10643.05 209715.20 00:23:20.562 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:20.562 Verification LBA range: start 0x0 length 0x400 00:23:20.562 Nvme5n1 : 1.11 289.09 18.07 0.00 0.00 207436.19 15518.92 205520.90 00:23:20.562 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:20.562 Verification LBA range: start 0x0 length 0x400 00:23:20.562 Nvme6n1 : 1.10 290.13 18.13 0.00 0.00 203647.88 15833.50 207198.62 00:23:20.562 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:20.562 Verification LBA range: start 0x0 length 0x400 00:23:20.562 Nvme7n1 : 1.10 289.60 18.10 0.00 0.00 201031.19 15099.49 207198.62 00:23:20.562 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:20.562 Verification LBA range: start 0x0 length 0x400 00:23:20.562 Nvme8n1 : 1.11 291.69 18.23 0.00 0.00 196744.37 1926.76 203843.17 
00:23:20.562 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:20.562 Verification LBA range: start 0x0 length 0x400 00:23:20.562 Nvme9n1 : 1.16 331.50 20.72 0.00 0.00 171232.43 4902.09 224814.69 00:23:20.562 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:20.562 Verification LBA range: start 0x0 length 0x400 00:23:20.562 Nvme10n1 : 1.16 330.40 20.65 0.00 0.00 169461.18 6501.17 209715.20 00:23:20.562 [2024-12-09T04:18:03.032Z] =================================================================================================================== 00:23:20.562 [2024-12-09T04:18:03.032Z] Total : 2952.87 184.55 0.00 0.00 201103.56 1926.76 224814.69 00:23:20.562 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:23:20.562 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:20.562 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:20.562 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:20.562 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:20.562 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:20.562 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:23:20.562 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:20.562 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:23:20.562 05:18:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:20.562 05:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:20.562 rmmod nvme_tcp 00:23:20.562 rmmod nvme_fabrics 00:23:20.562 rmmod nvme_keyring 00:23:20.822 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:20.822 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:23:20.822 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:23:20.822 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 548513 ']' 00:23:20.822 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 548513 00:23:20.822 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 548513 ']' 00:23:20.822 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 548513 00:23:20.822 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:23:20.822 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.822 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 548513 00:23:20.822 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:20.822 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:20.822 05:18:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 548513' 00:23:20.822 killing process with pid 548513 00:23:20.822 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 548513 00:23:20.822 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 548513 00:23:21.081 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:21.081 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:21.081 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:21.081 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:23:21.081 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:23:21.081 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:21.081 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:23:21.081 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:21.081 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:21.081 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.081 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.081 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:23.618 00:23:23.618 real 0m17.242s 00:23:23.618 user 0m35.984s 00:23:23.618 sys 0m7.307s 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:23.618 ************************************ 00:23:23.618 END TEST nvmf_shutdown_tc1 00:23:23.618 ************************************ 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:23.618 ************************************ 00:23:23.618 START TEST nvmf_shutdown_tc2 00:23:23.618 ************************************ 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.618 05:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:23.618 05:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.618 05:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.618 05:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:23.618 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:23.618 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:23.618 05:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:23.618 Found net devices under 0000:af:00.0: cvl_0_0 00:23:23.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:23.619 05:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:23.619 Found net devices under 0000:af:00.1: cvl_0_1 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:23.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.619 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.619 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.619 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:23.619 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:23.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:23:23.619 00:23:23.619 --- 10.0.0.2 ping statistics --- 00:23:23.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.619 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:23:23.619 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:23.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:23:23.619 00:23:23.619 --- 10.0.0.1 ping statistics --- 00:23:23.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.619 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:23:23.619 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.619 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:23:23.619 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:23.619 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.619 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:23.619 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:23.619 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.619 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:23.619 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:23.619 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:23.619 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:23.619 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:23.619 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:23.878 
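The `nvmf_tcp_init` steps above move the target NIC (`cvl_0_0`) into a private namespace, address both ends, and verify the path with cross-namespace pings, so NVMe/TCP traffic crosses a real kernel boundary. The sequence can be sketched as below; `run` echoes instead of executing, since the real commands need root, and the names and IPs are taken from this log.

```shell
# Dry-run of the netns plumbing from nvmf/common.sh@250-291: the target side
# lives in cvl_0_0_ns_spdk, the initiator stays in the default namespace.
run() { echo "+ $*"; }   # swap for "sudo" to execute for real (needs root)

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_INITIATOR_IP=10.0.0.1

run ip netns add "$NVMF_TARGET_NAMESPACE"
run ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"
run ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
# Both directions are pinged once, matching the two ping blocks in the log.
run ping -c 1 "$NVMF_FIRST_TARGET_IP"
run ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"
```

Note that once the namespace exists, every target-side command (including `nvmf_tgt` itself) is prefixed with `ip netns exec $NVMF_TARGET_NAMESPACE`, which is exactly the `NVMF_TARGET_NS_CMD` array the log shows being prepended to `NVMF_APP`.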
05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=551098 00:23:23.878 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 551098 00:23:23.878 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:23.878 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 551098 ']' 00:23:23.878 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.878 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.878 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.878 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.878 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:23.878 [2024-12-09 05:18:06.139041] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:23:23.878 [2024-12-09 05:18:06.139089] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.878 [2024-12-09 05:18:06.233810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:23.878 [2024-12-09 05:18:06.275240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.878 [2024-12-09 05:18:06.275278] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.878 [2024-12-09 05:18:06.275288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.878 [2024-12-09 05:18:06.275296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.878 [2024-12-09 05:18:06.275303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:23.878 [2024-12-09 05:18:06.277121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.878 [2024-12-09 05:18:06.277272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:23.878 [2024-12-09 05:18:06.277381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.878 [2024-12-09 05:18:06.277382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:24.815 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.815 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:24.815 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:24.815 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:24.815 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:24.815 [2024-12-09 05:18:07.024693] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.815 05:18:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.815 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:24.815 Malloc1 00:23:24.815 [2024-12-09 05:18:07.144225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.815 Malloc2 00:23:24.815 Malloc3 00:23:24.815 Malloc4 00:23:25.074 Malloc5 00:23:25.074 Malloc6 00:23:25.074 Malloc7 00:23:25.074 Malloc8 00:23:25.074 Malloc9 
00:23:25.074 Malloc10 00:23:25.074 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.074 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:25.074 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:25.074 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=551368 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 551368 /var/tmp/bdevperf.sock 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 551368 ']' 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
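The `create_subsystems` stage above loops `i` over `{1..10}` and `cat`s one RPC batch per index into `rpcs.txt`, which is why ten `MallocN` bdevs appear in the log. A hedged sketch of that batch-building pattern follows; the specific RPC names (`bdev_malloc_create` etc.) are standard SPDK RPCs assumed from context, not visible verbatim in this log excerpt.

```shell
# rpcs.txt-style batch builder mirroring target/shutdown.sh's
# 'for i in "${num_subsystems[@]}"; cat' loop. Kept to 3 iterations here;
# the log uses {1..10}.
num_subsystems=({1..3})
rpcs=""
for i in "${num_subsystems[@]}"; do
  rpcs+="bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
"
done
printf '%s' "$rpcs"
```

Feeding the whole batch to `rpc_cmd` in one shot (the bare `rpc_cmd` at shutdown.sh@36) is what makes the listener notice `*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***` appear once, after all subsystems are staged.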
00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:25.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:25.333 { 00:23:25.333 "params": { 00:23:25.333 "name": "Nvme$subsystem", 00:23:25.333 "trtype": "$TEST_TRANSPORT", 00:23:25.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.333 "adrfam": "ipv4", 00:23:25.333 "trsvcid": "$NVMF_PORT", 00:23:25.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.333 "hdgst": ${hdgst:-false}, 00:23:25.333 "ddgst": ${ddgst:-false} 00:23:25.333 }, 00:23:25.333 "method": "bdev_nvme_attach_controller" 00:23:25.333 } 00:23:25.333 EOF 00:23:25.333 )") 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:25.333 { 00:23:25.333 
"params": { 00:23:25.333 "name": "Nvme$subsystem", 00:23:25.333 "trtype": "$TEST_TRANSPORT", 00:23:25.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.333 "adrfam": "ipv4", 00:23:25.333 "trsvcid": "$NVMF_PORT", 00:23:25.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.333 "hdgst": ${hdgst:-false}, 00:23:25.333 "ddgst": ${ddgst:-false} 00:23:25.333 }, 00:23:25.333 "method": "bdev_nvme_attach_controller" 00:23:25.333 } 00:23:25.333 EOF 00:23:25.333 )") 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:25.333 { 00:23:25.333 "params": { 00:23:25.333 "name": "Nvme$subsystem", 00:23:25.333 "trtype": "$TEST_TRANSPORT", 00:23:25.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.333 "adrfam": "ipv4", 00:23:25.333 "trsvcid": "$NVMF_PORT", 00:23:25.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.333 "hdgst": ${hdgst:-false}, 00:23:25.333 "ddgst": ${ddgst:-false} 00:23:25.333 }, 00:23:25.333 "method": "bdev_nvme_attach_controller" 00:23:25.333 } 00:23:25.333 EOF 00:23:25.333 )") 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:25.333 { 00:23:25.333 "params": { 00:23:25.333 "name": "Nvme$subsystem", 00:23:25.333 "trtype": "$TEST_TRANSPORT", 00:23:25.333 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:23:25.333 "adrfam": "ipv4", 00:23:25.333 "trsvcid": "$NVMF_PORT", 00:23:25.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.333 "hdgst": ${hdgst:-false}, 00:23:25.333 "ddgst": ${ddgst:-false} 00:23:25.333 }, 00:23:25.333 "method": "bdev_nvme_attach_controller" 00:23:25.333 } 00:23:25.333 EOF 00:23:25.333 )") 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:25.333 { 00:23:25.333 "params": { 00:23:25.333 "name": "Nvme$subsystem", 00:23:25.333 "trtype": "$TEST_TRANSPORT", 00:23:25.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.333 "adrfam": "ipv4", 00:23:25.333 "trsvcid": "$NVMF_PORT", 00:23:25.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.333 "hdgst": ${hdgst:-false}, 00:23:25.333 "ddgst": ${ddgst:-false} 00:23:25.333 }, 00:23:25.333 "method": "bdev_nvme_attach_controller" 00:23:25.333 } 00:23:25.333 EOF 00:23:25.333 )") 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:25.333 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:25.333 { 00:23:25.333 "params": { 00:23:25.333 "name": "Nvme$subsystem", 00:23:25.333 "trtype": "$TEST_TRANSPORT", 00:23:25.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.333 "adrfam": "ipv4", 00:23:25.333 "trsvcid": "$NVMF_PORT", 00:23:25.333 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.334 "hdgst": ${hdgst:-false}, 00:23:25.334 "ddgst": ${ddgst:-false} 00:23:25.334 }, 00:23:25.334 "method": "bdev_nvme_attach_controller" 00:23:25.334 } 00:23:25.334 EOF 00:23:25.334 )") 00:23:25.334 [2024-12-09 05:18:07.627297] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:23:25.334 [2024-12-09 05:18:07.627345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid551368 ] 00:23:25.334 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:25.334 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:25.334 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:25.334 { 00:23:25.334 "params": { 00:23:25.334 "name": "Nvme$subsystem", 00:23:25.334 "trtype": "$TEST_TRANSPORT", 00:23:25.334 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.334 "adrfam": "ipv4", 00:23:25.334 "trsvcid": "$NVMF_PORT", 00:23:25.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.334 "hdgst": ${hdgst:-false}, 00:23:25.334 "ddgst": ${ddgst:-false} 00:23:25.334 }, 00:23:25.334 "method": "bdev_nvme_attach_controller" 00:23:25.334 } 00:23:25.334 EOF 00:23:25.334 )") 00:23:25.334 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:25.334 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:25.334 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 
-- # config+=("$(cat <<-EOF
00:23:25.334 {
00:23:25.334 "params": {
00:23:25.334 "name": "Nvme$subsystem",
00:23:25.334 "trtype": "$TEST_TRANSPORT",
00:23:25.334 "traddr": "$NVMF_FIRST_TARGET_IP",
00:23:25.334 "adrfam": "ipv4",
00:23:25.334 "trsvcid": "$NVMF_PORT",
00:23:25.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:23:25.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:23:25.334 "hdgst": ${hdgst:-false},
00:23:25.334 "ddgst": ${ddgst:-false}
00:23:25.334 },
00:23:25.334 "method": "bdev_nvme_attach_controller"
00:23:25.334 }
00:23:25.334 EOF
00:23:25.334 )")
00:23:25.334 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:23:25.334 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:23:25.334 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:23:25.334 {
00:23:25.334 "params": {
00:23:25.334 "name": "Nvme$subsystem",
00:23:25.334 "trtype": "$TEST_TRANSPORT",
00:23:25.334 "traddr": "$NVMF_FIRST_TARGET_IP",
00:23:25.334 "adrfam": "ipv4",
00:23:25.334 "trsvcid": "$NVMF_PORT",
00:23:25.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:23:25.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:23:25.334 "hdgst": ${hdgst:-false},
00:23:25.334 "ddgst": ${ddgst:-false}
00:23:25.334 },
00:23:25.334 "method": "bdev_nvme_attach_controller"
00:23:25.334 }
00:23:25.334 EOF
00:23:25.334 )")
00:23:25.334 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:23:25.334 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:23:25.334 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:23:25.334 {
00:23:25.334 "params": {
00:23:25.334 "name": "Nvme$subsystem",
00:23:25.334 "trtype": "$TEST_TRANSPORT",
00:23:25.334 "traddr": "$NVMF_FIRST_TARGET_IP",
00:23:25.334 "adrfam": "ipv4",
00:23:25.334 "trsvcid": "$NVMF_PORT",
00:23:25.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:23:25.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:23:25.334 "hdgst": ${hdgst:-false},
00:23:25.334 "ddgst": ${ddgst:-false}
00:23:25.334 },
00:23:25.334 "method": "bdev_nvme_attach_controller"
00:23:25.334 }
00:23:25.334 EOF
00:23:25.334 )")
00:23:25.334 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:23:25.334 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq .
00:23:25.334 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=,
00:23:25.334 05:18:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:23:25.334 "params": {
00:23:25.334 "name": "Nvme1",
00:23:25.334 "trtype": "tcp",
00:23:25.334 "traddr": "10.0.0.2",
00:23:25.334 "adrfam": "ipv4",
00:23:25.334 "trsvcid": "4420",
00:23:25.334 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:25.334 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:23:25.334 "hdgst": false,
00:23:25.334 "ddgst": false
00:23:25.334 },
00:23:25.334 "method": "bdev_nvme_attach_controller"
00:23:25.334 },{
00:23:25.334 "params": {
00:23:25.334 "name": "Nvme2",
00:23:25.334 "trtype": "tcp",
00:23:25.334 "traddr": "10.0.0.2",
00:23:25.334 "adrfam": "ipv4",
00:23:25.334 "trsvcid": "4420",
00:23:25.334 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:23:25.334 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:23:25.334 "hdgst": false,
00:23:25.334 "ddgst": false
00:23:25.334 },
00:23:25.334 "method": "bdev_nvme_attach_controller"
00:23:25.334 },{
00:23:25.334 "params": {
00:23:25.334 "name": "Nvme3",
00:23:25.334 "trtype": "tcp",
00:23:25.334 "traddr": "10.0.0.2",
00:23:25.334 "adrfam": "ipv4",
00:23:25.334 "trsvcid": "4420",
00:23:25.334 "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:23:25.334 "hostnqn": "nqn.2016-06.io.spdk:host3",
00:23:25.334 "hdgst": false,
00:23:25.334 "ddgst": false
00:23:25.334 },
00:23:25.334 "method": "bdev_nvme_attach_controller"
00:23:25.334 },{
00:23:25.334 "params": {
00:23:25.334 "name": "Nvme4",
00:23:25.334 "trtype": "tcp",
00:23:25.334 "traddr": "10.0.0.2",
00:23:25.334 "adrfam": "ipv4",
00:23:25.334 "trsvcid": "4420",
00:23:25.334 "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:23:25.334 "hostnqn": "nqn.2016-06.io.spdk:host4",
00:23:25.334 "hdgst": false,
00:23:25.334 "ddgst": false
00:23:25.334 },
00:23:25.334 "method": "bdev_nvme_attach_controller"
00:23:25.334 },{
00:23:25.334 "params": {
00:23:25.334 "name": "Nvme5",
00:23:25.334 "trtype": "tcp",
00:23:25.334 "traddr": "10.0.0.2",
00:23:25.334 "adrfam": "ipv4",
00:23:25.334 "trsvcid": "4420",
00:23:25.334 "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:23:25.334 "hostnqn": "nqn.2016-06.io.spdk:host5",
00:23:25.334 "hdgst": false,
00:23:25.334 "ddgst": false
00:23:25.334 },
00:23:25.334 "method": "bdev_nvme_attach_controller"
00:23:25.334 },{
00:23:25.334 "params": {
00:23:25.334 "name": "Nvme6",
00:23:25.334 "trtype": "tcp",
00:23:25.334 "traddr": "10.0.0.2",
00:23:25.334 "adrfam": "ipv4",
00:23:25.334 "trsvcid": "4420",
00:23:25.334 "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:23:25.334 "hostnqn": "nqn.2016-06.io.spdk:host6",
00:23:25.334 "hdgst": false,
00:23:25.334 "ddgst": false
00:23:25.334 },
00:23:25.334 "method": "bdev_nvme_attach_controller"
00:23:25.334 },{
00:23:25.334 "params": {
00:23:25.334 "name": "Nvme7",
00:23:25.334 "trtype": "tcp",
00:23:25.334 "traddr": "10.0.0.2",
00:23:25.334 "adrfam": "ipv4",
00:23:25.334 "trsvcid": "4420",
00:23:25.334 "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:23:25.334 "hostnqn": "nqn.2016-06.io.spdk:host7",
00:23:25.334 "hdgst": false,
00:23:25.334 "ddgst": false
00:23:25.334 },
00:23:25.334 "method": "bdev_nvme_attach_controller"
00:23:25.334 },{
00:23:25.334 "params": {
00:23:25.334 "name": "Nvme8",
00:23:25.334 "trtype": "tcp",
00:23:25.334 "traddr": "10.0.0.2",
00:23:25.334 "adrfam": "ipv4",
00:23:25.334 "trsvcid": "4420",
00:23:25.334 "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:23:25.334 "hostnqn": "nqn.2016-06.io.spdk:host8",
00:23:25.334 "hdgst": false,
00:23:25.334 "ddgst": false
00:23:25.334 },
00:23:25.334 "method": "bdev_nvme_attach_controller"
00:23:25.334 },{
00:23:25.334 "params": {
00:23:25.334 "name": "Nvme9",
00:23:25.334 "trtype": "tcp",
00:23:25.334 "traddr": "10.0.0.2",
00:23:25.334 "adrfam": "ipv4",
00:23:25.334 "trsvcid": "4420",
00:23:25.334 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:23:25.334 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:23:25.334 "hdgst": false,
00:23:25.334 "ddgst": false
00:23:25.334 },
00:23:25.334 "method": "bdev_nvme_attach_controller"
00:23:25.334 },{
00:23:25.334 "params": {
00:23:25.334 "name": "Nvme10",
00:23:25.334 "trtype": "tcp",
00:23:25.334 "traddr": "10.0.0.2",
00:23:25.334 "adrfam": "ipv4",
00:23:25.334 "trsvcid": "4420",
00:23:25.334 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:23:25.334 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:23:25.334 "hdgst": false,
00:23:25.334 "ddgst": false
00:23:25.334 },
00:23:25.334 "method": "bdev_nvme_attach_controller"
00:23:25.334 }'
00:23:25.334 [2024-12-09 05:18:07.724230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:25.334 [2024-12-09 05:18:07.762999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:27.236 Running I/O for 10 seconds...
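The `config+=("$(cat <<-EOF ... EOF)")` lines traced above are nvmf/common.sh building one `bdev_nvme_attach_controller` JSON object per subsystem in a bash array, then joining the array elements with `IFS=,` and pretty-printing the result with `jq .` to produce the merged bdevperf config shown in the `printf '%s\n'` output. A minimal self-contained sketch of that pattern; the fixed `tcp`/`10.0.0.2`/`4420` values and the two-iteration loop are stand-ins for the log's `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, `$NVMF_PORT`, and subsystem list, and the `jq` step is omitted:

```shell
# Sketch of the config-assembly pattern traced above: each loop iteration
# captures a heredoc into one element of a bash array, and the elements
# are then joined with commas (the IFS=, step in the log) into a single
# comma-separated sequence of JSON objects.
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
IFS=,                     # join array elements with commas, as the log does
joined="${config[*]}"
printf '%s\n' "$joined"
```

Joining with `"${config[*]}"` uses the first character of `IFS` as the separator, which is what turns the individual objects into the comma-separated sequence printed above.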
00:23:27.237 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:27.237 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0
00:23:27.237 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:23:27.237 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:27.237 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:23:27.237 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:27.237 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:23:27.237 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:23:27.237 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:23:27.237 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1
00:23:27.237 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i
00:23:27.237 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:23:27.237 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:23:27.237 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:23:27.237 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:27.237 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:23:27.237 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:23:27.237 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:27.237 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3
00:23:27.237 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']'
00:23:27.237 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25
00:23:27.496 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- ))
00:23:27.496 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:23:27.496 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:23:27.496 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:23:27.496 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:27.496 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:23:27.496 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:27.496 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67
00:23:27.496 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']'
00:23:27.496 05:18:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- ))
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 551368
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 551368 ']'
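The `waitforio` trace above is a bounded poll: `(( i = 10 ))` attempts, each reading `num_read_ops` for `Nvme1n1` from the bdevperf RPC socket (`rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'`), sleeping 0.25 s between probes, and setting `ret=0` once the count reaches 100 (3, then 67, then 131 in this run). A self-contained sketch of that loop; the `get_read_ops` stub is hypothetical and stands in for the real RPC pipeline:

```shell
# Bounded polling loop in the shape of the waitforio trace above.
# get_read_ops is a hypothetical stub standing in for:
#   rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
#     | jq -r '.bdevs[0].num_read_ops'
# It returns a growing counter so the sketch terminates on its own.
get_read_ops() { echo $(( (11 - i) * 64 )); }   # 64, 128, ... as i counts down

ret=1
(( i = 10 ))
while (( i != 0 )); do
  read_io_count=$(get_read_ops)
  if [ "$read_io_count" -ge 100 ]; then
    ret=0          # enough reads observed; the caller may proceed
    break
  fi
  sleep 0.25
  (( i-- ))
done
```

In the trace the threshold check appears as `'[' 131 -ge 100 ']'` followed by `ret=0`, `break`, and `return 0`, matching the branch taken above on the iteration that crosses the threshold.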
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 551368
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 551368
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 551368'
00:23:27.756 killing process with pid 551368
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 551368
00:23:27.756 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 551368
00:23:27.756 Received shutdown signal, test time was about 0.935385 seconds
00:23:27.756
00:23:27.756 Latency(us)
00:23:27.756 [2024-12-09T04:18:10.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:27.756 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:27.756 Verification LBA range: start 0x0 length 0x400
00:23:27.756 Nvme1n1 : 0.93 275.98 17.25 0.00 0.00 229651.46 17091.79 214748.36
00:23:27.756 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:27.756 Verification LBA range: start 0x0 length 0x400
00:23:27.756 Nvme2n1 : 0.91 279.90 17.49 0.00 0.00 222548.99 16043.21 206359.76
00:23:27.756 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:27.756 Verification LBA range: start 0x0 length 0x400
00:23:27.756 Nvme3n1 : 0.93 350.09 21.88 0.00 0.00 174504.10 4639.95 202165.45
00:23:27.756 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:27.756 Verification LBA range: start 0x0 length 0x400
00:23:27.756 Nvme4n1 : 0.92 291.30 18.21 0.00 0.00 205092.24 7969.18 206359.76
00:23:27.756 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:27.756 Verification LBA range: start 0x0 length 0x400
00:23:27.756 Nvme5n1 : 0.90 282.98 17.69 0.00 0.00 208086.43 14889.78 204682.04
00:23:27.756 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:27.756 Verification LBA range: start 0x0 length 0x400
00:23:27.756 Nvme6n1 : 0.92 277.61 17.35 0.00 0.00 209458.18 17825.79 211392.92
00:23:27.756 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:27.756 Verification LBA range: start 0x0 length 0x400
00:23:27.756 Nvme7n1 : 0.91 281.10 17.57 0.00 0.00 202681.96 18140.36 208876.34
00:23:27.756 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:27.756 Verification LBA range: start 0x0 length 0x400
00:23:27.756 Nvme8n1 : 0.90 290.84 18.18 0.00 0.00 190564.38 5190.45 196293.43
00:23:27.756 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:27.756 Verification LBA range: start 0x0 length 0x400
00:23:27.756 Nvme9n1 : 0.93 273.87 17.12 0.00 0.00 200643.79 17720.93 219781.53
00:23:27.756 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:27.756 Verification LBA range: start 0x0 length 0x400
00:23:27.756 Nvme10n1 : 0.93 274.16 17.14 0.00 0.00 197279.13 16882.07 234881.02
00:23:27.756 [2024-12-09T04:18:10.226Z] ===================================================================================================================
00:23:27.756 [2024-12-09T04:18:10.226Z] Total : 2877.83 179.86 0.00 0.00 203254.31 4639.95 234881.02
00:23:28.014 05:18:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:23:28.950 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 551098
00:23:28.950 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:23:28.950 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:29.210 rmmod nvme_tcp
00:23:29.210 rmmod nvme_fabrics
00:23:29.210 rmmod nvme_keyring
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 --
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 551098 ']'
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 551098
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 551098 ']'
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 551098
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 551098
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 551098'
00:23:29.210 killing process with pid 551098
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 551098
00:23:29.210 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 551098
00:23:29.780 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:29.780 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:29.780 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:29.780 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr
00:23:29.780 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore
00:23:29.780 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save
00:23:29.780 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:29.780 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:29.780 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:29.780 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:29.780 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:29.780 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:31.689 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:31.689
00:23:31.689 real 0m8.366s
00:23:31.689 user 0m25.361s
00:23:31.689 sys 0m1.692s
00:23:31.689 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:31.689 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:23:31.689 ************************************
00:23:31.689 END TEST nvmf_shutdown_tc2
00:23:31.689 ************************************
00:23:31.689 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3
00:23:31.689 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:23:31.689 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:31.689 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:23:31.949 ************************************
00:23:31.949 START TEST nvmf_shutdown_tc3
00:23:31.949 ************************************
00:23:31.949 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3
00:23:31.949 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget
00:23:31.949 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit
00:23:31.949 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:23:31.949 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:31.949 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs
00:23:31.949 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:23:31.949 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:23:31.949 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:31.949 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:31.949 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:31.949 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:23:31.949 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:23:31.949 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable
00:23:31.949 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=()
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=()
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=()
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=()
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=()
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=()
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:23:31.950 Found 0000:af:00.0 (0x8086 - 0x159b)
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:23:31.950 Found 0000:af:00.1 (0x8086 - 0x159b)
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:23:31.950 Found net devices under 0000:af:00.0: cvl_0_0
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:23:31.950 Found net devices under 0000:af:00.1: cvl_0_1
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:31.950 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:32.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:32.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:23:32.211 00:23:32.211 --- 10.0.0.2 ping statistics --- 00:23:32.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.211 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:32.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:23:32.211 00:23:32.211 --- 10.0.0.1 ping statistics --- 00:23:32.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.211 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
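The nvmf_tcp_init section above isolates the target NIC in its own network namespace, assigns the 10.0.0.x addresses, and opens TCP port 4420 (`ipts` appears to be a wrapper that tags the iptables rule with an `SPDK_NVMF:` comment for later cleanup). A condensed dry-run sketch of that sequence, with a hypothetical `run` helper that echoes instead of executing, since the real commands require root:

```shell
# Dry-run sketch of the namespace setup seen in the trace. Interface names
# and addresses are taken from the log; "run" is a local stand-in that
# prints each command rather than executing it.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                          # namespace for the target side
run ip link set cvl_0_0 netns "$NS"             # move target NIC into it
run ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The bidirectional `ping -c 1` checks in the log then confirm 10.0.0.1 and 10.0.0.2 can reach each other across the namespace boundary before the target starts.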
nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=552612 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 552612 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 552612 ']' 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.211 05:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.211 05:18:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:32.211 [2024-12-09 05:18:14.612530] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:23:32.211 [2024-12-09 05:18:14.612575] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.471 [2024-12-09 05:18:14.709273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:32.471 [2024-12-09 05:18:14.750482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.471 [2024-12-09 05:18:14.750520] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.471 [2024-12-09 05:18:14.750529] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.471 [2024-12-09 05:18:14.750537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.471 [2024-12-09 05:18:14.750560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:32.471 [2024-12-09 05:18:14.752359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.471 [2024-12-09 05:18:14.752470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:32.471 [2024-12-09 05:18:14.752577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:32.471 [2024-12-09 05:18:14.752576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.040 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.040 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:33.040 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:33.040 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.040 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:33.040 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.040 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:33.040 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.040 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:33.040 [2024-12-09 05:18:15.492087] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.040 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.040 05:18:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:33.040 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:33.040 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.040 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.299 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:33.299 Malloc1 00:23:33.299 [2024-12-09 05:18:15.617258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.299 Malloc2 00:23:33.299 Malloc3 00:23:33.299 Malloc4 00:23:33.299 Malloc5 00:23:33.558 Malloc6 00:23:33.558 Malloc7 00:23:33.558 Malloc8 00:23:33.558 Malloc9 
00:23:33.558 Malloc10 00:23:33.558 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.558 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:33.558 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.558 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:33.817 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=552924 00:23:33.817 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 552924 /var/tmp/bdevperf.sock 00:23:33.817 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 552924 ']' 00:23:33.817 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.817 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.817 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:33.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.818 { 00:23:33.818 "params": { 00:23:33.818 "name": "Nvme$subsystem", 00:23:33.818 "trtype": "$TEST_TRANSPORT", 00:23:33.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.818 "adrfam": "ipv4", 00:23:33.818 "trsvcid": "$NVMF_PORT", 00:23:33.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.818 "hdgst": ${hdgst:-false}, 00:23:33.818 "ddgst": ${ddgst:-false} 00:23:33.818 }, 00:23:33.818 "method": "bdev_nvme_attach_controller" 00:23:33.818 } 00:23:33.818 EOF 00:23:33.818 )") 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.818 { 00:23:33.818 
"params": { 00:23:33.818 "name": "Nvme$subsystem", 00:23:33.818 "trtype": "$TEST_TRANSPORT", 00:23:33.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.818 "adrfam": "ipv4", 00:23:33.818 "trsvcid": "$NVMF_PORT", 00:23:33.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.818 "hdgst": ${hdgst:-false}, 00:23:33.818 "ddgst": ${ddgst:-false} 00:23:33.818 }, 00:23:33.818 "method": "bdev_nvme_attach_controller" 00:23:33.818 } 00:23:33.818 EOF 00:23:33.818 )") 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.818 { 00:23:33.818 "params": { 00:23:33.818 "name": "Nvme$subsystem", 00:23:33.818 "trtype": "$TEST_TRANSPORT", 00:23:33.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.818 "adrfam": "ipv4", 00:23:33.818 "trsvcid": "$NVMF_PORT", 00:23:33.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.818 "hdgst": ${hdgst:-false}, 00:23:33.818 "ddgst": ${ddgst:-false} 00:23:33.818 }, 00:23:33.818 "method": "bdev_nvme_attach_controller" 00:23:33.818 } 00:23:33.818 EOF 00:23:33.818 )") 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.818 { 00:23:33.818 "params": { 00:23:33.818 "name": "Nvme$subsystem", 00:23:33.818 "trtype": "$TEST_TRANSPORT", 00:23:33.818 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:23:33.818 "adrfam": "ipv4", 00:23:33.818 "trsvcid": "$NVMF_PORT", 00:23:33.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.818 "hdgst": ${hdgst:-false}, 00:23:33.818 "ddgst": ${ddgst:-false} 00:23:33.818 }, 00:23:33.818 "method": "bdev_nvme_attach_controller" 00:23:33.818 } 00:23:33.818 EOF 00:23:33.818 )") 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.818 { 00:23:33.818 "params": { 00:23:33.818 "name": "Nvme$subsystem", 00:23:33.818 "trtype": "$TEST_TRANSPORT", 00:23:33.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.818 "adrfam": "ipv4", 00:23:33.818 "trsvcid": "$NVMF_PORT", 00:23:33.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.818 "hdgst": ${hdgst:-false}, 00:23:33.818 "ddgst": ${ddgst:-false} 00:23:33.818 }, 00:23:33.818 "method": "bdev_nvme_attach_controller" 00:23:33.818 } 00:23:33.818 EOF 00:23:33.818 )") 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.818 { 00:23:33.818 "params": { 00:23:33.818 "name": "Nvme$subsystem", 00:23:33.818 "trtype": "$TEST_TRANSPORT", 00:23:33.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.818 "adrfam": "ipv4", 00:23:33.818 "trsvcid": "$NVMF_PORT", 00:23:33.818 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.818 "hdgst": ${hdgst:-false}, 00:23:33.818 "ddgst": ${ddgst:-false} 00:23:33.818 }, 00:23:33.818 "method": "bdev_nvme_attach_controller" 00:23:33.818 } 00:23:33.818 EOF 00:23:33.818 )") 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:33.818 [2024-12-09 05:18:16.107172] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:23:33.818 [2024-12-09 05:18:16.107228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid552924 ] 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.818 { 00:23:33.818 "params": { 00:23:33.818 "name": "Nvme$subsystem", 00:23:33.818 "trtype": "$TEST_TRANSPORT", 00:23:33.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.818 "adrfam": "ipv4", 00:23:33.818 "trsvcid": "$NVMF_PORT", 00:23:33.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.818 "hdgst": ${hdgst:-false}, 00:23:33.818 "ddgst": ${ddgst:-false} 00:23:33.818 }, 00:23:33.818 "method": "bdev_nvme_attach_controller" 00:23:33.818 } 00:23:33.818 EOF 00:23:33.818 )") 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 
-- # config+=("$(cat <<-EOF 00:23:33.818 { 00:23:33.818 "params": { 00:23:33.818 "name": "Nvme$subsystem", 00:23:33.818 "trtype": "$TEST_TRANSPORT", 00:23:33.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.818 "adrfam": "ipv4", 00:23:33.818 "trsvcid": "$NVMF_PORT", 00:23:33.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.818 "hdgst": ${hdgst:-false}, 00:23:33.818 "ddgst": ${ddgst:-false} 00:23:33.818 }, 00:23:33.818 "method": "bdev_nvme_attach_controller" 00:23:33.818 } 00:23:33.818 EOF 00:23:33.818 )") 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.818 { 00:23:33.818 "params": { 00:23:33.818 "name": "Nvme$subsystem", 00:23:33.818 "trtype": "$TEST_TRANSPORT", 00:23:33.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.818 "adrfam": "ipv4", 00:23:33.818 "trsvcid": "$NVMF_PORT", 00:23:33.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.818 "hdgst": ${hdgst:-false}, 00:23:33.818 "ddgst": ${ddgst:-false} 00:23:33.818 }, 00:23:33.818 "method": "bdev_nvme_attach_controller" 00:23:33.818 } 00:23:33.818 EOF 00:23:33.818 )") 00:23:33.818 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:33.819 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.819 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.819 { 00:23:33.819 "params": { 00:23:33.819 "name": "Nvme$subsystem", 00:23:33.819 "trtype": 
"$TEST_TRANSPORT", 00:23:33.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.819 "adrfam": "ipv4", 00:23:33.819 "trsvcid": "$NVMF_PORT", 00:23:33.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.819 "hdgst": ${hdgst:-false}, 00:23:33.819 "ddgst": ${ddgst:-false} 00:23:33.819 }, 00:23:33.819 "method": "bdev_nvme_attach_controller" 00:23:33.819 } 00:23:33.819 EOF 00:23:33.819 )") 00:23:33.819 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:33.819 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:23:33.819 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:33.819 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:33.819 "params": { 00:23:33.819 "name": "Nvme1", 00:23:33.819 "trtype": "tcp", 00:23:33.819 "traddr": "10.0.0.2", 00:23:33.819 "adrfam": "ipv4", 00:23:33.819 "trsvcid": "4420", 00:23:33.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:33.819 "hdgst": false, 00:23:33.819 "ddgst": false 00:23:33.819 }, 00:23:33.819 "method": "bdev_nvme_attach_controller" 00:23:33.819 },{ 00:23:33.819 "params": { 00:23:33.819 "name": "Nvme2", 00:23:33.819 "trtype": "tcp", 00:23:33.819 "traddr": "10.0.0.2", 00:23:33.819 "adrfam": "ipv4", 00:23:33.819 "trsvcid": "4420", 00:23:33.819 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:33.819 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:33.819 "hdgst": false, 00:23:33.819 "ddgst": false 00:23:33.819 }, 00:23:33.819 "method": "bdev_nvme_attach_controller" 00:23:33.819 },{ 00:23:33.819 "params": { 00:23:33.819 "name": "Nvme3", 00:23:33.819 "trtype": "tcp", 00:23:33.819 "traddr": "10.0.0.2", 00:23:33.819 "adrfam": "ipv4", 00:23:33.819 "trsvcid": "4420", 00:23:33.819 "subnqn": 
"nqn.2016-06.io.spdk:cnode3", 00:23:33.819 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:33.819 "hdgst": false, 00:23:33.819 "ddgst": false 00:23:33.819 }, 00:23:33.819 "method": "bdev_nvme_attach_controller" 00:23:33.819 },{ 00:23:33.819 "params": { 00:23:33.819 "name": "Nvme4", 00:23:33.819 "trtype": "tcp", 00:23:33.819 "traddr": "10.0.0.2", 00:23:33.819 "adrfam": "ipv4", 00:23:33.819 "trsvcid": "4420", 00:23:33.819 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:33.819 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:33.819 "hdgst": false, 00:23:33.819 "ddgst": false 00:23:33.819 }, 00:23:33.819 "method": "bdev_nvme_attach_controller" 00:23:33.819 },{ 00:23:33.819 "params": { 00:23:33.819 "name": "Nvme5", 00:23:33.819 "trtype": "tcp", 00:23:33.819 "traddr": "10.0.0.2", 00:23:33.819 "adrfam": "ipv4", 00:23:33.819 "trsvcid": "4420", 00:23:33.819 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:33.819 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:33.819 "hdgst": false, 00:23:33.819 "ddgst": false 00:23:33.819 }, 00:23:33.819 "method": "bdev_nvme_attach_controller" 00:23:33.819 },{ 00:23:33.819 "params": { 00:23:33.819 "name": "Nvme6", 00:23:33.819 "trtype": "tcp", 00:23:33.819 "traddr": "10.0.0.2", 00:23:33.819 "adrfam": "ipv4", 00:23:33.819 "trsvcid": "4420", 00:23:33.819 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:33.819 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:33.819 "hdgst": false, 00:23:33.819 "ddgst": false 00:23:33.819 }, 00:23:33.819 "method": "bdev_nvme_attach_controller" 00:23:33.819 },{ 00:23:33.819 "params": { 00:23:33.819 "name": "Nvme7", 00:23:33.819 "trtype": "tcp", 00:23:33.819 "traddr": "10.0.0.2", 00:23:33.819 "adrfam": "ipv4", 00:23:33.819 "trsvcid": "4420", 00:23:33.819 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:33.819 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:33.819 "hdgst": false, 00:23:33.819 "ddgst": false 00:23:33.819 }, 00:23:33.819 "method": "bdev_nvme_attach_controller" 00:23:33.819 },{ 00:23:33.819 "params": { 00:23:33.819 "name": 
"Nvme8", 00:23:33.819 "trtype": "tcp", 00:23:33.819 "traddr": "10.0.0.2", 00:23:33.819 "adrfam": "ipv4", 00:23:33.819 "trsvcid": "4420", 00:23:33.819 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:33.819 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:33.819 "hdgst": false, 00:23:33.819 "ddgst": false 00:23:33.819 }, 00:23:33.819 "method": "bdev_nvme_attach_controller" 00:23:33.819 },{ 00:23:33.819 "params": { 00:23:33.819 "name": "Nvme9", 00:23:33.819 "trtype": "tcp", 00:23:33.819 "traddr": "10.0.0.2", 00:23:33.819 "adrfam": "ipv4", 00:23:33.819 "trsvcid": "4420", 00:23:33.819 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:33.819 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:33.819 "hdgst": false, 00:23:33.819 "ddgst": false 00:23:33.819 }, 00:23:33.819 "method": "bdev_nvme_attach_controller" 00:23:33.819 },{ 00:23:33.819 "params": { 00:23:33.819 "name": "Nvme10", 00:23:33.819 "trtype": "tcp", 00:23:33.819 "traddr": "10.0.0.2", 00:23:33.819 "adrfam": "ipv4", 00:23:33.819 "trsvcid": "4420", 00:23:33.819 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:33.819 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:33.819 "hdgst": false, 00:23:33.819 "ddgst": false 00:23:33.819 }, 00:23:33.819 "method": "bdev_nvme_attach_controller" 00:23:33.819 }' 00:23:33.819 [2024-12-09 05:18:16.202198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.819 [2024-12-09 05:18:16.240921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.197 Running I/O for 10 seconds... 
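The `gen_nvmf_target_json` expansion above builds one `"params"` block per subsystem from a heredoc template, accumulates them in a `config` array, and finally joins them (the real helper also normalizes the result through `jq`). A simplified sketch of that accumulation for two subsystems, with values mirroring the trace; the jq normalization step is omitted so the fragment stays dependency-free:

```shell
# Sketch of the per-subsystem config assembly from the trace. Each loop
# iteration captures a heredoc (with variables expanded) into the config
# array; IFS=, joins the fragments the way the real helper's output shows.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{ "name": "Nvme$subsystem", "trtype": "$TEST_TRANSPORT", "traddr": "$NVMF_FIRST_TARGET_IP", "trsvcid": "$NVMF_PORT", "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem" }
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"
```

This is why the later `printf '%s\n' '{ ... },{ ... }'` line in the log shows fully expanded Nvme1..Nvme10 entries separated by commas.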
00:23:35.197 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.197 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:35.197 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:35.197 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.197 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:35.456 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.456 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:35.456 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:35.456 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:35.456 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:35.456 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:35.456 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:35.456 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:35.456 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:35.456 05:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:35.456 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.456 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:35.456 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:35.456 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.456 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:35.456 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:35.456 05:18:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:35.715 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:35.715 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:35.715 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:35.715 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:35.715 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.715 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:35.715 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:23:35.715 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:35.715 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:35.715 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:35.975 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:35.975 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:35.975 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:35.975 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:35.975 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.975 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:35.975 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.975 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:35.975 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:35.975 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:35.975 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:35.975 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:35.975 05:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 552612 00:23:35.975 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 552612 ']' 00:23:35.975 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 552612 00:23:35.975 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:23:35.975 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.975 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 552612 00:23:35.975 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:35.975 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:35.976 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 552612'
killing process with pid 552612
05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 552612 00:23:35.976 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 552612
00:23:35.976 [2024-12-09 05:18:18.442644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88c8e0 is same with the state(6) to be set [message repeated for tqpair=0x88c8e0; identical duplicates omitted]
00:23:36.251 [2024-12-09 05:18:18.444390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88f490 is same with the state(6) to be set [identical duplicates omitted]
00:23:36.252 [2024-12-09 05:18:18.446070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88cdb0 is same with the state(6) to be set [identical duplicates omitted]
00:23:36.252 [2024-12-09 05:18:18.447269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.252 [2024-12-09 05:18:18.447301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.252 [2024-12-09 05:18:18.447313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.252 [2024-12-09 05:18:18.447323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.252 [2024-12-09 05:18:18.447333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.252 [2024-12-09 05:18:18.447342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.252 [2024-12-09 05:18:18.447352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.252 [2024-12-09 05:18:18.447361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.252 [2024-12-09 05:18:18.447371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc3390 is same with the state(6) to be
set 00:23:36.252 [2024-12-09 05:18:18.447433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.252 [2024-12-09 05:18:18.447444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.252 [2024-12-09 05:18:18.447454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.252 [2024-12-09 05:18:18.447463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.252 [2024-12-09 05:18:18.447478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.252 [2024-12-09 05:18:18.447487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.252 [2024-12-09 05:18:18.447496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.252 [2024-12-09 05:18:18.447505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.252 [2024-12-09 05:18:18.447514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195b780 is same with the state(6) to be set 00:23:36.252 [2024-12-09 05:18:18.449730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.252 [2024-12-09 05:18:18.449758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.252 [2024-12-09 05:18:18.449769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x88d770 is same with the state(6) to be set 00:23:36.252 [2024-12-09 05:18:18.449778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.252 [2024-12-09 05:18:18.449787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.252 [2024-12-09 05:18:18.449796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.252 [2024-12-09 05:18:18.449806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.252 [2024-12-09 05:18:18.449814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.252 [2024-12-09 05:18:18.449823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.252 [2024-12-09 05:18:18.449832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.252 [2024-12-09 05:18:18.449840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.252 [2024-12-09 05:18:18.449849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.252 [2024-12-09 05:18:18.449858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.252 [2024-12-09 05:18:18.449866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.252 [2024-12-09 05:18:18.449858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.252 [2024-12-09 05:18:18.449875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.252 [2024-12-09 05:18:18.449884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-09 05:18:18.449886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.252 he state(6) to be set 00:23:36.252 [2024-12-09 05:18:18.449896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.252 [2024-12-09 05:18:18.449904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with t[2024-12-09 05:18:18.449903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:1he state(6) to be set 00:23:36.252 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.252 [2024-12-09 05:18:18.449914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.252 [2024-12-09 05:18:18.449919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.252 [2024-12-09 05:18:18.449924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.449931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.449933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.449941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.449943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.449953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with t[2024-12-09 05:18:18.449953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:1he state(6) to be set 00:23:36.253 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.449963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with t[2024-12-09 05:18:18.449964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:23:36.253 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.449973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.449976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.449982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.449986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.449992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.449997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450001] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:1[2024-12-09 05:18:18.450039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 he state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:1[2024-12-09 05:18:18.450063] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 he state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-09 05:18:18.450073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 he state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with t[2024-12-09 05:18:18.450085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:1he state(6) to be set 00:23:36.253 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with t[2024-12-09 05:18:18.450096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:23:36.253 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is 
same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:1[2024-12-09 05:18:18.450172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 he state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-09 05:18:18.450183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is 
same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 he state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with t[2024-12-09 05:18:18.450196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128he state(6) to be set 00:23:36.253 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with t[2024-12-09 05:18:18.450309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:1he state(6) to be set 00:23:36.253 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 
[2024-12-09 05:18:18.450318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88d770 is same with the state(6) to be set 00:23:36.253 [2024-12-09 05:18:18.450361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:36.253 [2024-12-09 05:18:18.450500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450607] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.253 [2024-12-09 05:18:18.450915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.253 [2024-12-09 05:18:18.450923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.254 
[2024-12-09 05:18:18.450933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.254 [2024-12-09 05:18:18.450942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.254 [2024-12-09 05:18:18.450952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.254 [2024-12-09 05:18:18.450961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.254 [2024-12-09 05:18:18.450971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.254 [2024-12-09 05:18:18.450980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.254 [2024-12-09 05:18:18.450990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.254 [2024-12-09 05:18:18.450998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.254 [2024-12-09 05:18:18.451008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.254 [2024-12-09 05:18:18.451019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.254 [2024-12-09 05:18:18.451030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.254 [2024-12-09 05:18:18.451039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.254 [2024-12-09 05:18:18.451050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.254 [2024-12-09 05:18:18.451058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.254 [2024-12-09 05:18:18.451068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.254 [2024-12-09 05:18:18.451077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.254 [2024-12-09 05:18:18.451087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.254 [2024-12-09 05:18:18.451087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.254 [2024-12-09 05:18:18.451109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.254 [2024-12-09 05:18:18.451110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.254 [2024-12-09 05:18:18.451121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the 
state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.254 [2024-12-09 05:18:18.451131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.254 [2024-12-09 05:18:18.451141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.254 [2024-12-09 05:18:18.451151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.254 [2024-12-09 05:18:18.451163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.254 [2024-12-09 05:18:18.451173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.254 [2024-12-09 05:18:18.451183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the 
state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 
05:18:18.451419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451524] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.451667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88dc40 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 
is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 
00:23:36.254 [2024-12-09 05:18:18.452839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452947] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.452999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.453007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.254 [2024-12-09 05:18:18.453016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.453024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.453033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.453041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.453050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.453059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.453068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.453076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.453084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.453093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.453101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.453110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.453118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.453127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.453135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.453143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.453152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 
is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.453162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.453170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e130 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 
00:23:36.255 [2024-12-09 05:18:18.454465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454569] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 
is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 
00:23:36.255 [2024-12-09 05:18:18.454890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.454915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88e600 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.455276] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:36.255 [2024-12-09 05:18:18.455306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:36.255 [2024-12-09 05:18:18.455350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195a8e0 (9): Bad file descriptor 00:23:36.255 [2024-12-09 05:18:18.455397] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:36.255 [2024-12-09 05:18:18.455441] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:36.255 [2024-12-09 05:18:18.455804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.455822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.455831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.455840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 
05:18:18.455848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.455857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.455865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.455874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.455882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.455890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.455899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.455907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.455915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.455926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.455934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.455943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.455951] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.455960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.455968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.455976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.455984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.455993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 
is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.255 [2024-12-09 05:18:18.456885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x195a8e0 with addr=10.0.0.2, port=4420 00:23:36.255 [2024-12-09 05:18:18.456897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195a8e0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.456965] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:36.255 [2024-12-09 05:18:18.457399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195a8e0 (9): Bad file descriptor 00:23:36.255 [2024-12-09 05:18:18.457439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.255 [2024-12-09 05:18:18.457452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.255 [2024-12-09 05:18:18.457463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.255 [2024-12-09 05:18:18.457472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.255 [2024-12-09 05:18:18.457482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.255 [2024-12-09 05:18:18.457492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.255 [2024-12-09 05:18:18.457501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.255 [2024-12-09 05:18:18.457510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.255 [2024-12-09 05:18:18.457519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194f7c0 is same with the state(6) to be set 00:23:36.255 [2024-12-09 05:18:18.457550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.255 [2024-12-09 05:18:18.457560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.255 [2024-12-09 05:18:18.457570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.256 [2024-12-09 05:18:18.457579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.457588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.256 [2024-12-09 05:18:18.457597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.457609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.256 [2024-12-09 05:18:18.457618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.457627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195b2f0 is same with the state(6) to be set 00:23:36.256 [2024-12-09 05:18:18.457645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc3390 (9): Bad file descriptor 00:23:36.256 [2024-12-09 05:18:18.457675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.256 [2024-12-09 05:18:18.457686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.457695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.256 [2024-12-09 05:18:18.457704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.457714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.256 [2024-12-09 05:18:18.457723] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.457732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.256 [2024-12-09 05:18:18.457741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.457749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db37c0 is same with the state(6) to be set 00:23:36.256 [2024-12-09 05:18:18.457779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.256 [2024-12-09 05:18:18.457790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.457799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.256 [2024-12-09 05:18:18.457808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.457818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.256 [2024-12-09 05:18:18.457827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.457836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.256 [2024-12-09 05:18:18.457845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.457854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dad150 is same with the state(6) to be set 00:23:36.256 [2024-12-09 05:18:18.457879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.256 [2024-12-09 05:18:18.457890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.457899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.256 [2024-12-09 05:18:18.457909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.457923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.256 [2024-12-09 05:18:18.457932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.457941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.256 [2024-12-09 05:18:18.457950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.457959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194f5c0 is same with the state(6) to be set 00:23:36.256 [2024-12-09 05:18:18.457976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195b780 (9): Bad file descriptor 00:23:36.256 [2024-12-09 05:18:18.458012] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.256 [2024-12-09 05:18:18.458022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.256 [2024-12-09 05:18:18.458041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.256 [2024-12-09 05:18:18.458059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.256 [2024-12-09 05:18:18.458077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c4110 is same with the state(6) to be set 00:23:36.256 [2024-12-09 05:18:18.458180] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:36.256 [2024-12-09 05:18:18.458223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 
05:18:18.458249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 
nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:36.256 [2024-12-09 05:18:18.458582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458690] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.458990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.458999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.459009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 
05:18:18.459018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.459028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.459037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.459048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.459058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.459069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.256 [2024-12-09 05:18:18.459077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.256 [2024-12-09 05:18:18.459088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.459096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.459107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.459116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.459127] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.459136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.459146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.459155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.459166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.459175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.459185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.459194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.459204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.459217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.459228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.459236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.459247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.459256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.459266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.459274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.459285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.459295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.459307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.459316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.459326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.459335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.459345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 
[2024-12-09 05:18:18.459354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.459365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.459373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.459384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.459392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.459403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.459412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.459422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.459431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.459443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.459452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.467610] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.257 [2024-12-09 05:18:18.467625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.257 [2024-12-09 05:18:18.467636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.257 [2024-12-09 05:18:18.467647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.257 [2024-12-09 05:18:18.467658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.257 [2024-12-09 05:18:18.467669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.257 [2024-12-09 05:18:18.467681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.257 [2024-12-09 05:18:18.467691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.257 [2024-12-09 05:18:18.467703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.257 [2024-12-09 05:18:18.467713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.257 [2024-12-09 05:18:18.467727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.257 [2024-12-09 05:18:18.467738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.257 [2024-12-09 05:18:18.467749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.257 [2024-12-09 05:18:18.467759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.257 [2024-12-09 05:18:18.467770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.257 [2024-12-09 05:18:18.467781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ead0 is same with the state(6) to be set 00:23:36.257 [2024-12-09 05:18:18.472368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.472383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.472396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d61ce0 is same with the state(6) to be set 00:23:36.257 [2024-12-09 05:18:18.472892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:36.257 [2024-12-09 05:18:18.472915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:36.257 [2024-12-09 05:18:18.472929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:36.257 [2024-12-09 05:18:18.472941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:23:36.257 [2024-12-09 05:18:18.472977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194f7c0 (9): Bad file descriptor 00:23:36.257 [2024-12-09 05:18:18.473000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195b2f0 (9): Bad file descriptor 00:23:36.257 [2024-12-09 05:18:18.473026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db37c0 (9): Bad file descriptor 00:23:36.257 [2024-12-09 05:18:18.473049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dad150 (9): Bad file descriptor 00:23:36.257 [2024-12-09 05:18:18.473071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194f5c0 (9): Bad file descriptor 00:23:36.257 [2024-12-09 05:18:18.473113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.257 [2024-12-09 05:18:18.473127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.473140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.257 [2024-12-09 05:18:18.473152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.473164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.257 [2024-12-09 05:18:18.473175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.473187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:36.257 [2024-12-09 05:18:18.473198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.473219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db2890 is same with the state(6) to be set 00:23:36.257 [2024-12-09 05:18:18.473241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c4110 (9): Bad file descriptor 00:23:36.257 [2024-12-09 05:18:18.474399] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:36.257 [2024-12-09 05:18:18.474532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.474548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.474565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.474576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.474589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.474600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.474613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.474624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.474636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.474647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.474660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.474671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.474683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.474694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.474706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.474717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.474730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.474740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.474753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.474764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.474776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.474787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.474800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.474814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.474827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.474838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.474850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.474861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.474874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.474884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.474897] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.474908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.474920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.474931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.474943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.474954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.474967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.474978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.474990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.475001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.475013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.475025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.475037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.475049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.475062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.475073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.475085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.475096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.475110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.475121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.475133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.475144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.475157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.475167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.475180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.475191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.475204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.475221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.257 [2024-12-09 05:18:18.475234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.257 [2024-12-09 05:18:18.475245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475303] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475432] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 
05:18:18.475730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.475984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.475998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 
nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:36.258 [2024-12-09 05:18:18.476388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 
05:18:18.476439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476617] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.476955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.476972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 
[2024-12-09 05:18:18.476987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.477004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.477019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.477036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.477051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.477068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.477083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.477100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.477116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.477134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.477149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.477165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.477180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.477197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.477220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.477237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.477252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.477269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.477284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.477301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.477316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.477333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.477348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.477365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.477379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.477396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.477411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.477428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.477443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.477460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.477474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.477492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.258 [2024-12-09 05:18:18.477506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.258 [2024-12-09 05:18:18.477526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.477540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.477557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.477572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.477589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.477604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.477621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.477635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.477653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.477667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.477685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.477699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.477716] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.477731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.477748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.477763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.477780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.477795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.477812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.477827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.477844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.477860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.477878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.477892] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.477909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.477929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.477947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.477962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.477978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.477993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.478010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.478025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.478042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.478057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.478074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.478089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.478106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.478121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.478138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.478152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.478170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.478184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.478201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.478222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.478239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.478254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 
05:18:18.478271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.478286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.478303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.478319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.478338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.478353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.478370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.478386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.478403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.478417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.478434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.478449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.478465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5f6b0 is same with the state(6) to be set 00:23:36.259 [2024-12-09 05:18:18.481549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.481579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.481600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.481615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.481632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.481647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.481665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.481680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.481697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.481712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.481729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.481744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.481761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.481776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.481793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.481808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.481829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.481844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.481861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.481876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.481893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:36.259 [2024-12-09 05:18:18.481908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.481925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.481940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.481957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.481972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.481989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.482021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.482053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.482085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.482117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.482149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.482181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.482220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.482255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.482287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.482319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.482351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.482383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.482414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.482446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.482478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.482510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.482541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.482573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.482605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.482639] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.259 [2024-12-09 05:18:18.482671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.259 [2024-12-09 05:18:18.482685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.482702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.482717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.482734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.482749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.482766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.482781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.482798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.482812] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.482829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.482844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.482861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.482876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.482893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.482908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.482925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.482939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.482956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.482971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.482989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.483003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.483020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.483037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.483054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.483069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.483086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.483101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.483118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.483133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.483150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.483165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 
05:18:18.483182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.483196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.483220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.483235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.483252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.483267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.483284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.483299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.483316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.483331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.483348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.483362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.483379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.483394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.483411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.483426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.483445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.483460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.483477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.483492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.483509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.483524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.483541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.483555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.483572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.483587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.483604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.483619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.483635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba03b0 is same with the state(6) to be set 00:23:36.260 [2024-12-09 05:18:18.485148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:36.260 [2024-12-09 05:18:18.485177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:36.260 [2024-12-09 05:18:18.485360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.260 [2024-12-09 05:18:18.485386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c4110 with addr=10.0.0.2, port=4420 00:23:36.260 [2024-12-09 05:18:18.485402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c4110 is same with the state(6) to be set 00:23:36.260 [2024-12-09 05:18:18.485458] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to 
perform failover, already in progress. 00:23:36.260 [2024-12-09 05:18:18.485508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db2890 (9): Bad file descriptor 00:23:36.260 [2024-12-09 05:18:18.485924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:36.260 [2024-12-09 05:18:18.486136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.260 [2024-12-09 05:18:18.486153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x195a8e0 with addr=10.0.0.2, port=4420 00:23:36.260 [2024-12-09 05:18:18.486164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195a8e0 is same with the state(6) to be set 00:23:36.260 [2024-12-09 05:18:18.486318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.260 [2024-12-09 05:18:18.486332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x195b780 with addr=10.0.0.2, port=4420 00:23:36.260 [2024-12-09 05:18:18.486342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195b780 is same with the state(6) to be set 00:23:36.260 [2024-12-09 05:18:18.486358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c4110 (9): Bad file descriptor 00:23:36.260 [2024-12-09 05:18:18.486634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.486648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.486663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.486673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.486685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.486694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.486706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.486716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.486727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.486737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.486748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.486758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.486769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.486779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.486790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.486800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.486811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.486821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.486833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.486842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.486854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.486863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.486874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.486884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.486898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.486908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:36.260 [2024-12-09 05:18:18.486919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.486929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.486940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.486950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.486962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.486972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.486983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.486993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.487004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.487014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.487025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.487035] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.487046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.487056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.487072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.487081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.487093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.487102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.487114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.487124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.487135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.487145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.487156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.487168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.487179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.487190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.487201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.487217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.487229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.487239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.487251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.487260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.487272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.487282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.487293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.487303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.487314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.487324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.260 [2024-12-09 05:18:18.487336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.260 [2024-12-09 05:18:18.487345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.261 [2024-12-09 05:18:18.487357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.261 [2024-12-09 05:18:18.487366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.261 [2024-12-09 05:18:18.487378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.261 [2024-12-09 05:18:18.487387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.261 [2024-12-09 05:18:18.487399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.261 [2024-12-09 
[2024-12-09 05:18:18.487408 - 05:18:18.488005] 00:23:36.261 nvme_qpair.c: repeated *NOTICE* pairs, one per outstanding command — 243:nvme_io_qpair_print_command: READ sqid:1 cid:36..63 nsid:1 lba:29184..32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by 474:spdk_nvme_print_completion: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (chunk begins mid-entry)
[2024-12-09 05:18:18.488015] 00:23:36.261 nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b606a0 is same with the state(6) to be set
[2024-12-09 05:18:18.489047 - 05:18:18.490434] 00:23:36.261/00:23:36.262 nvme_qpair.c: same READ/ABORTED - SQ DELETION pattern as above for sqid:1 cid:0..63 nsid:1 lba:24576..32640 (step 128)
[2024-12-09 05:18:18.490445] 00:23:36.262 nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5f810 is same with the state(6) to be set
[2024-12-09 05:18:18.491477 - 05:18:18.492011] 00:23:36.262 nvme_qpair.c: same READ/ABORTED - SQ DELETION pattern for sqid:1 cid:0..24 nsid:1 lba:24576..27648 (step 128)
[2024-12-09 05:18:18.492022] 00:23:36.262 nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.262 [2024-12-09 05:18:18.492032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.262 [2024-12-09 05:18:18.492043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.262 [2024-12-09 05:18:18.492053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.262 [2024-12-09 05:18:18.492064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.262 [2024-12-09 05:18:18.492074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.262 [2024-12-09 05:18:18.492085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.262 [2024-12-09 05:18:18.492095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.262 [2024-12-09 05:18:18.492107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.262 [2024-12-09 05:18:18.492116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.262 [2024-12-09 05:18:18.492128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.262 [2024-12-09 05:18:18.492137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:36.262 [2024-12-09 05:18:18.492149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.262 [2024-12-09 05:18:18.492159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.262 [2024-12-09 05:18:18.492170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.262 [2024-12-09 05:18:18.492180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.262 [2024-12-09 05:18:18.492191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.262 [2024-12-09 05:18:18.492204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.262 [2024-12-09 05:18:18.492220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.262 [2024-12-09 05:18:18.492230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.262 [2024-12-09 05:18:18.492242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.262 [2024-12-09 05:18:18.492251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.262 [2024-12-09 05:18:18.492263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.262 [2024-12-09 
05:18:18.492273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.262 [2024-12-09 05:18:18.492284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.262 [2024-12-09 05:18:18.492294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492392] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 
[2024-12-09 05:18:18.492637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.492853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.492863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60a40 is same with the state(6) to be set 00:23:36.263 [2024-12-09 05:18:18.493905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.493923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.493936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.493947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.493958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.493968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.493980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.493990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:36.263 [2024-12-09 05:18:18.494046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494166] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 
05:18:18.494537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494657] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.263 [2024-12-09 05:18:18.494839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.263 [2024-12-09 05:18:18.494851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.494861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.494873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.494885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.494896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 
[2024-12-09 05:18:18.494906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.494918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.494928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.494939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.494949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.494960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.494970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.494981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.494991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.495003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.495013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.495024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.495034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.495046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.495056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.495067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.495077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.495089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.495099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.495110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.495121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.495132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.495142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.495155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.495165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.495177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.495187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.495198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.495212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.495225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.495235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.495247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.495256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.495268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.495278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.495290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.495300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.495310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d62ff0 is same with the state(6) to be set 00:23:36.264 [2024-12-09 05:18:18.496336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:36.264 [2024-12-09 05:18:18.496646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496755] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.496987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.496997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.497007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.497016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.497027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.497036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.497046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.497055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.497066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.497075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.497086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 
05:18:18.497094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.497105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.497115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.497125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.497135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.497145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.497155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.497166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.264 [2024-12-09 05:18:18.497175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.264 [2024-12-09 05:18:18.497186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.265 [2024-12-09 05:18:18.497196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.265 [2024-12-09 05:18:18.497211] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.265 [2024-12-09 05:18:18.497221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.265 [2024-12-09 05:18:18.497231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.265 [2024-12-09 05:18:18.497240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.265 [2024-12-09 05:18:18.497251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.265 [2024-12-09 05:18:18.497260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.265 [2024-12-09 05:18:18.497271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.265 [2024-12-09 05:18:18.497280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.265 [2024-12-09 05:18:18.497290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.265 [2024-12-09 05:18:18.497299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.265 [2024-12-09 05:18:18.497310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.265 [2024-12-09 05:18:18.497319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.265 [2024-12-09 05:18:18.497330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.265 [2024-12-09 05:18:18.497339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.265 [2024-12-09 05:18:18.497350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.265 [2024-12-09 05:18:18.497359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.265 [2024-12-09 05:18:18.497370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.265 [2024-12-09 05:18:18.497379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.265 [2024-12-09 05:18:18.497389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.265 [2024-12-09 05:18:18.497398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.265 [2024-12-09 05:18:18.497409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.265 [2024-12-09 05:18:18.497418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.265 [2024-12-09 05:18:18.497429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.265 
[2024-12-09 05:18:18.497438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.265 [2024-12-09 05:18:18.497452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.265 [2024-12-09 05:18:18.497461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.265 [2024-12-09 05:18:18.497472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.265 [2024-12-09 05:18:18.497481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.265 [2024-12-09 05:18:18.497492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.265 [2024-12-09 05:18:18.497501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.265 [2024-12-09 05:18:18.497511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.265 [2024-12-09 05:18:18.497521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.265 [2024-12-09 05:18:18.497531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.265 [2024-12-09 05:18:18.497540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.265 [2024-12-09 05:18:18.497550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:36.265 [2024-12-09 05:18:18.497559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:36.265 [2024-12-09 05:18:18.497570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:36.265 [2024-12-09 05:18:18.497580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:36.265 [2024-12-09 05:18:18.497590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:36.265 [2024-12-09 05:18:18.497599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:36.265 [2024-12-09 05:18:18.497610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:36.265 [2024-12-09 05:18:18.497619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:36.265 [2024-12-09 05:18:18.498986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:36.265 [2024-12-09 05:18:18.499010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:36.265 [2024-12-09 05:18:18.499023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:23:36.265 [2024-12-09 05:18:18.499035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:36.265 [2024-12-09 05:18:18.499246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:36.265 [2024-12-09 05:18:18.499264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc3390 with addr=10.0.0.2, port=4420
00:23:36.265 [2024-12-09 05:18:18.499274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc3390 is same with the state(6) to be set
00:23:36.265 [2024-12-09 05:18:18.499288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195a8e0 (9): Bad file descriptor
00:23:36.265 [2024-12-09 05:18:18.499303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195b780 (9): Bad file descriptor
00:23:36.265 [2024-12-09 05:18:18.499314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:23:36.265 [2024-12-09 05:18:18.499323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:23:36.265 [2024-12-09 05:18:18.499334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:23:36.265 [2024-12-09 05:18:18.499344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:23:36.265 [2024-12-09 05:18:18.499375] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:23:36.265 [2024-12-09 05:18:18.499388] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:23:36.265 [2024-12-09 05:18:18.499403] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:23:36.265 [2024-12-09 05:18:18.499415] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:23:36.265 [2024-12-09 05:18:18.499426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc3390 (9): Bad file descriptor
00:23:36.265 [2024-12-09 05:18:18.499715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:36.265 task offset: 33920 on job bdev=Nvme3n1 fails
00:23:36.265
00:23:36.265 Latency(us)
00:23:36.265 [2024-12-09T04:18:18.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:36.265 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:36.265 Job: Nvme1n1 ended in about 0.92 seconds with error
00:23:36.265 Verification LBA range: start 0x0 length 0x400
00:23:36.265 Nvme1n1 : 0.92 209.72 13.11 69.91 0.00 226695.37 16043.21 212231.78
00:23:36.265 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:36.265 Job: Nvme2n1 ended in about 0.92 seconds with error
00:23:36.265 Verification LBA range: start 0x0 length 0x400
00:23:36.265 Nvme2n1 : 0.92 213.00 13.31 69.20 0.00 220956.96 5714.74 205520.90
00:23:36.265 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:36.265 Job: Nvme3n1 ended in about 0.89 seconds with error
00:23:36.265 Verification LBA range: start 0x0 length 0x400
00:23:36.265 Nvme3n1 : 0.89 287.46 17.97 71.86 0.00 170314.30 16357.79 203004.31
00:23:36.265 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:36.265 Job: Nvme4n1 ended in about 0.93 seconds with error
00:23:36.265 Verification LBA range: start 0x0 length 0x400
00:23:36.265 Nvme4n1 : 0.93 207.05 12.94 69.02 0.00 218372.71 13631.49 226492.42
00:23:36.265 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:36.265 Job: Nvme5n1 ended in about 0.93 seconds with error
00:23:36.265 Verification LBA range: start 0x0 length 0x400
00:23:36.265 Nvme5n1 : 0.93 206.51 12.91 68.84 0.00 215190.53 17406.36 208037.48
00:23:36.265 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:36.265 Job: Nvme6n1 ended in about 0.91 seconds with error
00:23:36.265 Verification LBA range: start 0x0 length 0x400
00:23:36.265 Nvme6n1 : 0.91 216.43 13.53 70.31 0.00 202565.20 20656.95 202165.45
00:23:36.265 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:36.265 Job: Nvme7n1 ended in about 0.93 seconds with error
00:23:36.265 Verification LBA range: start 0x0 length 0x400
00:23:36.265 Nvme7n1 : 0.93 210.27 13.14 68.66 0.00 205060.33 25899.83 194615.71
00:23:36.265 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:36.265 Job: Nvme8n1 ended in about 0.93 seconds with error
00:23:36.265 Verification LBA range: start 0x0 length 0x400
00:23:36.265 Nvme8n1 : 0.93 205.47 12.84 68.49 0.00 205134.85 14470.35 216426.09
00:23:36.265 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:36.265 Job: Nvme9n1 ended in about 0.92 seconds with error
00:23:36.265 Verification LBA range: start 0x0 length 0x400
00:23:36.265 Nvme9n1 : 0.92 209.36 13.09 69.79 0.00 197095.53 7549.75 233203.30
00:23:36.265 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:36.265 Job: Nvme10n1 ended in about 0.92 seconds with error
00:23:36.265 Verification LBA range: start 0x0 length 0x400
00:23:36.265 Nvme10n1 : 0.92 215.07 13.44 69.52 0.00 189804.86 8336.18 213070.64
00:23:36.265 [2024-12-09T04:18:18.735Z] ===================================================================================================================
00:23:36.265 [2024-12-09T04:18:18.735Z] Total : 2180.35 136.27 695.59 0.00 204266.87 5714.74 233203.30
00:23:36.265 [2024-12-09 05:18:18.527682] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:36.265 [2024-12-09 05:18:18.527735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:23:36.265 [2024-12-09 05:18:18.528049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.265 [2024-12-09 05:18:18.528069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1db2890 with addr=10.0.0.2, port=4420 00:23:36.265 [2024-12-09 05:18:18.528082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db2890 is same with the state(6) to be set 00:23:36.265 [2024-12-09 05:18:18.528258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.265 [2024-12-09 05:18:18.528271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x194f7c0 with addr=10.0.0.2, port=4420 00:23:36.265 [2024-12-09 05:18:18.528280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194f7c0 is same with the state(6) to be set 00:23:36.265 [2024-12-09 05:18:18.528410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.265 [2024-12-09 05:18:18.528422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x194f5c0 with addr=10.0.0.2, port=4420 00:23:36.265 [2024-12-09 05:18:18.528432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194f5c0 is same with the state(6) to be set 00:23:36.265 [2024-12-09 05:18:18.528652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.265 [2024-12-09 05:18:18.528664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x195b2f0 with addr=10.0.0.2, port=4420 00:23:36.265 [2024-12-09 05:18:18.528674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195b2f0 is same with the state(6) to be set 00:23:36.265 [2024-12-09 05:18:18.528686] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:36.265 [2024-12-09 05:18:18.528695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:36.265 [2024-12-09 05:18:18.528706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:36.265 [2024-12-09 05:18:18.528717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:23:36.265 [2024-12-09 05:18:18.528728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:36.265 [2024-12-09 05:18:18.528736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:36.265 [2024-12-09 05:18:18.528745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:36.265 [2024-12-09 05:18:18.528753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:23:36.265 [2024-12-09 05:18:18.529947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:36.265 [2024-12-09 05:18:18.530144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.265 [2024-12-09 05:18:18.530165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dad150 with addr=10.0.0.2, port=4420 00:23:36.265 [2024-12-09 05:18:18.530175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dad150 is same with the state(6) to be set 00:23:36.265 [2024-12-09 05:18:18.530385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.265 [2024-12-09 05:18:18.530399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1db37c0 with addr=10.0.0.2, port=4420 00:23:36.265 [2024-12-09 05:18:18.530408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db37c0 is same with the state(6) to be set 00:23:36.265 [2024-12-09 05:18:18.530423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db2890 (9): Bad file descriptor 00:23:36.265 [2024-12-09 05:18:18.530437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194f7c0 (9): Bad file descriptor 00:23:36.265 [2024-12-09 05:18:18.530448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194f5c0 (9): Bad file descriptor 00:23:36.265 [2024-12-09 05:18:18.530460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195b2f0 (9): Bad file descriptor 00:23:36.265 [2024-12-09 05:18:18.530470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:36.265 [2024-12-09 05:18:18.530478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller 
reinitialization failed 00:23:36.265 [2024-12-09 05:18:18.530488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:36.265 [2024-12-09 05:18:18.530497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:23:36.265 [2024-12-09 05:18:18.530546] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:23:36.265 [2024-12-09 05:18:18.530560] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:23:36.265 [2024-12-09 05:18:18.530572] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:23:36.265 [2024-12-09 05:18:18.530587] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
00:23:36.265 [2024-12-09 05:18:18.530843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.265 [2024-12-09 05:18:18.530857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c4110 with addr=10.0.0.2, port=4420 00:23:36.265 [2024-12-09 05:18:18.530866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c4110 is same with the state(6) to be set 00:23:36.265 [2024-12-09 05:18:18.530877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dad150 (9): Bad file descriptor 00:23:36.265 [2024-12-09 05:18:18.530889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db37c0 (9): Bad file descriptor 00:23:36.265 [2024-12-09 05:18:18.530899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:23:36.265 [2024-12-09 05:18:18.530907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:36.265 [2024-12-09 05:18:18.530916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:23:36.265 [2024-12-09 05:18:18.530924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:23:36.265 [2024-12-09 05:18:18.530933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:36.265 [2024-12-09 05:18:18.530942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:36.265 [2024-12-09 05:18:18.530953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:23:36.265 [2024-12-09 05:18:18.530961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:36.265 [2024-12-09 05:18:18.530970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:36.265 [2024-12-09 05:18:18.530978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:36.266 [2024-12-09 05:18:18.530987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:36.266 [2024-12-09 05:18:18.530995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:23:36.266 [2024-12-09 05:18:18.531003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:36.266 [2024-12-09 05:18:18.531011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:36.266 [2024-12-09 05:18:18.531019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:36.266 [2024-12-09 05:18:18.531027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:23:36.266 [2024-12-09 05:18:18.531837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:36.266 [2024-12-09 05:18:18.531860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:36.266 [2024-12-09 05:18:18.531870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:36.266 [2024-12-09 05:18:18.531900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c4110 (9): Bad file descriptor 00:23:36.266 [2024-12-09 05:18:18.531912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:36.266 [2024-12-09 05:18:18.531920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:36.266 [2024-12-09 05:18:18.531929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:36.266 [2024-12-09 05:18:18.531937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:23:36.266 [2024-12-09 05:18:18.531946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:36.266 [2024-12-09 05:18:18.531955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:36.266 [2024-12-09 05:18:18.531963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:36.266 [2024-12-09 05:18:18.531971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:23:36.266 [2024-12-09 05:18:18.532220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.266 [2024-12-09 05:18:18.532236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x195b780 with addr=10.0.0.2, port=4420 00:23:36.266 [2024-12-09 05:18:18.532246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195b780 is same with the state(6) to be set 00:23:36.266 [2024-12-09 05:18:18.532318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.266 [2024-12-09 05:18:18.532330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x195a8e0 with addr=10.0.0.2, port=4420 00:23:36.266 [2024-12-09 05:18:18.532339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195a8e0 is same with the state(6) to be set 00:23:36.266 [2024-12-09 05:18:18.532422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.266 [2024-12-09 05:18:18.532436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc3390 with addr=10.0.0.2, port=4420 00:23:36.266 [2024-12-09 05:18:18.532445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc3390 is same with the state(6) to be set 00:23:36.266 [2024-12-09 05:18:18.532454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:36.266 [2024-12-09 05:18:18.532462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:36.266 [2024-12-09 05:18:18.532471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:36.266 [2024-12-09 05:18:18.532479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:23:36.266 [2024-12-09 05:18:18.532507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195b780 (9): Bad file descriptor 00:23:36.266 [2024-12-09 05:18:18.532520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195a8e0 (9): Bad file descriptor 00:23:36.266 [2024-12-09 05:18:18.532530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc3390 (9): Bad file descriptor 00:23:36.266 [2024-12-09 05:18:18.532558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:36.266 [2024-12-09 05:18:18.532567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:36.266 [2024-12-09 05:18:18.532576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:36.266 [2024-12-09 05:18:18.532583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:36.266 [2024-12-09 05:18:18.532593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:36.266 [2024-12-09 05:18:18.532601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:36.266 [2024-12-09 05:18:18.532609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:36.266 [2024-12-09 05:18:18.532617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:23:36.266 [2024-12-09 05:18:18.532625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:36.266 [2024-12-09 05:18:18.532634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:36.266 [2024-12-09 05:18:18.532642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:36.266 [2024-12-09 05:18:18.532650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:23:36.525 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 552924 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 552924 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 552924 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:23:37.462 05:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:37.462 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:23:37.721 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:37.721 rmmod nvme_tcp 00:23:37.721 rmmod nvme_fabrics 00:23:37.721 rmmod nvme_keyring 00:23:37.721 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:37.721 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:37.721 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:37.721 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 552612 ']' 00:23:37.721 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 552612 00:23:37.721 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 552612 ']' 00:23:37.721 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 552612 00:23:37.721 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (552612) - No such process 00:23:37.721 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 552612 is not found' 00:23:37.721 Process with pid 552612 is not found 00:23:37.721 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:37.721 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:37.721 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:37.721 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:37.721 05:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:37.721 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:37.721 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:37.722 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:37.722 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:37.722 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.722 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:37.722 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.625 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:39.885 00:23:39.885 real 0m7.928s 00:23:39.885 user 0m18.974s 00:23:39.885 sys 0m1.618s 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:39.885 ************************************ 00:23:39.885 END TEST nvmf_shutdown_tc3 00:23:39.885 ************************************ 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown 
-- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:39.885 ************************************ 00:23:39.885 START TEST nvmf_shutdown_tc4 00:23:39.885 ************************************ 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:39.885 05:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.885 05:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:39.885 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:39.885 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:39.885 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.886 05:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:39.886 Found net devices under 0000:af:00.0: cvl_0_0 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:39.886 Found net devices under 0000:af:00.1: cvl_0_1 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.886 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:40.146 05:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:40.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:23:40.146 00:23:40.146 --- 10.0.0.2 ping statistics --- 00:23:40.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.146 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:40.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:40.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:23:40.146 00:23:40.146 --- 10.0.0.1 ping statistics --- 00:23:40.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.146 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:40.146 05:18:22 
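The namespace plumbing that `nvmf_tcp_init` performs above can be read out of the xtrace: flush both NICs, create a target-side namespace, move the target NIC into it, assign the 10.0.0.x/24 addresses, bring the links up, open TCP port 4420, and verify with cross-namespace pings. A minimal dry-run sketch of that sequence follows; the interface names (`cvl_0_0`, `cvl_0_1`), namespace name, and IPs mirror this log, and `run` only echoes each command so the sketch is safe without root or real hardware (drop the `echo` wrapper to actually apply it).

```shell
#!/bin/sh
# Dry-run sketch of the netns setup seen in nvmf/common.sh's nvmf_tcp_init.
# "run" prints each command instead of executing it (no root needed).
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk          # target-side namespace, as in the log
IFACE_TGT=cvl_0_0           # target NIC, moved into the namespace
IFACE_INI=cvl_0_1           # initiator NIC, left in the root namespace

run ip -4 addr flush "$IFACE_TGT"                                # clear stale addresses
run ip -4 addr flush "$IFACE_INI"
run ip netns add "$NS"                                           # create target namespace
run ip link set "$IFACE_TGT" netns "$NS"                         # move target NIC into it
run ip addr add 10.0.0.1/24 dev "$IFACE_INI"                     # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IFACE_TGT" # target IP
run ip link set "$IFACE_INI" up
run ip netns exec "$NS" ip link set "$IFACE_TGT" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$IFACE_INI" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                                           # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                       # target -> initiator
```

The two pings correspond to the `common.sh@290`/`@291` checks above; only after both succeed does the harness return 0 and launch `nvmf_tgt` inside the namespace.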
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=554117 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 554117 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 554117 ']' 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.146 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:40.146 [2024-12-09 05:18:22.610951] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:23:40.146 [2024-12-09 05:18:22.610999] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.405 [2024-12-09 05:18:22.709541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:40.405 [2024-12-09 05:18:22.751969] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.405 [2024-12-09 05:18:22.752007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.405 [2024-12-09 05:18:22.752017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.405 [2024-12-09 05:18:22.752028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.405 [2024-12-09 05:18:22.752035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:40.405 [2024-12-09 05:18:22.753828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.405 [2024-12-09 05:18:22.753939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:40.405 [2024-12-09 05:18:22.754046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.405 [2024-12-09 05:18:22.754048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:41.351 [2024-12-09 05:18:23.505293] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.351 05:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.351 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:41.351 Malloc1 00:23:41.351 [2024-12-09 05:18:23.630245] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.351 Malloc2 00:23:41.351 Malloc3 00:23:41.351 Malloc4 00:23:41.351 Malloc5 00:23:41.610 Malloc6 00:23:41.610 Malloc7 00:23:41.610 Malloc8 00:23:41.610 Malloc9 
00:23:41.610 Malloc10 00:23:41.610 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.610 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:41.610 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:41.610 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:41.610 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=554424 00:23:41.610 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:41.610 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:41.869 [2024-12-09 05:18:24.146363] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:23:47.145 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:47.145 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 554117 00:23:47.145 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 554117 ']' 00:23:47.145 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 554117 00:23:47.145 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:23:47.145 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:47.145 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 554117 00:23:47.145 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:47.145 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:47.145 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 554117' 00:23:47.145 killing process with pid 554117 00:23:47.145 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 554117 00:23:47.145 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 554117 00:23:47.145 [2024-12-09 05:18:29.153844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f93ac0 is same with the state(6) to be set 00:23:47.145 [2024-12-09 05:18:29.153899] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f93ac0 is same with the state(6) to be set 00:23:47.145 [2024-12-09 05:18:29.153910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f93ac0 is same with the state(6) to be set 00:23:47.145 [2024-12-09 05:18:29.153919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f93ac0 is same with the state(6) to be set 00:23:47.145 [2024-12-09 05:18:29.153928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f93ac0 is same with the state(6) to be set 00:23:47.145 [2024-12-09 05:18:29.153937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f93ac0 is same with the state(6) to be set 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 
00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 [2024-12-09 05:18:29.158533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error 
(sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 
00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 [2024-12-09 05:18:29.159457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.145 Write completed with error (sct=0, sc=8) 00:23:47.145 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 
00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with 
error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 [2024-12-09 05:18:29.160462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O 
failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting 
I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 [2024-12-09 05:18:29.161265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efc1e0 is same with the state(6) to be set 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 [2024-12-09 05:18:29.161293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efc1e0 is same with the state(6) to be set 00:23:47.146 starting I/O failed: -6 00:23:47.146 [2024-12-09 05:18:29.161304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efc1e0 is same with the state(6) to be set 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 [2024-12-09 05:18:29.161314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efc1e0 is same with the state(6) to be set 00:23:47.146 starting I/O failed: -6 00:23:47.146 [2024-12-09 05:18:29.161324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efc1e0 is same with the state(6) to be set 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 [2024-12-09 05:18:29.161332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efc1e0 is same with the state(6) to be set 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 
00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 [2024-12-09 05:18:29.161637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efc6b0 is same with the state(6) to be set 00:23:47.146 starting I/O failed: -6 00:23:47.146 [2024-12-09 05:18:29.161664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efc6b0 is same with the state(6) to be set 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 [2024-12-09 05:18:29.161674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efc6b0 is same with the state(6) to be set 00:23:47.146 [2024-12-09 05:18:29.161683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efc6b0 is same with the state(6) to be set 00:23:47.146 [2024-12-09 05:18:29.161692] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efc6b0 is same with the state(6) to be set 00:23:47.146 [2024-12-09 05:18:29.161701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efc6b0 is same with the state(6) to be set 00:23:47.146 [2024-12-09 05:18:29.161709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efc6b0 is same with the state(6) to be set 00:23:47.146 [2024-12-09 05:18:29.161717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efc6b0 is same with the state(6) to be set 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.146 Write completed with error (sct=0, sc=8) 00:23:47.146 starting I/O failed: -6 00:23:47.147 [2024-12-09 05:18:29.162024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:47.147 NVMe io qpair process completion error 00:23:47.147 [2024-12-09 05:18:29.162084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efcb80 is same with the state(6) to be set 00:23:47.147 [2024-12-09 05:18:29.162112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efcb80 is same with the state(6) to be set 00:23:47.147 [2024-12-09 05:18:29.162122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efcb80 is same with the state(6) to be set 00:23:47.147 [2024-12-09 05:18:29.162131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efcb80 is same with the state(6) to be set 00:23:47.147 [2024-12-09 05:18:29.162139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efcb80 is same with the state(6) to be set 00:23:47.147 [2024-12-09 05:18:29.162403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efbd10 
is same with the state(6) to be set 00:23:47.147 [2024-12-09 05:18:29.162427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efbd10 is same with the state(6) to be set 00:23:47.147 [2024-12-09 05:18:29.162437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efbd10 is same with the state(6) to be set 00:23:47.147 [2024-12-09 05:18:29.162445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efbd10 is same with the state(6) to be set 00:23:47.147 [2024-12-09 05:18:29.162455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efbd10 is same with the state(6) to be set 00:23:47.147 [2024-12-09 05:18:29.162467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efbd10 is same with the state(6) to be set 00:23:47.147 [2024-12-09 05:18:29.162476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efbd10 is same with the state(6) to be set 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 
00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 [2024-12-09 05:18:29.163068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 
00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 
00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 [2024-12-09 05:18:29.163964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:47.147 starting I/O failed: -6 00:23:47.147 starting I/O failed: -6 00:23:47.147 starting I/O failed: -6 00:23:47.147 [2024-12-09 05:18:29.164251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efaea0 is same with the state(6) to be set 00:23:47.147 [2024-12-09 05:18:29.164266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efaea0 is same with the state(6) to be set 00:23:47.147 starting I/O failed: -6 00:23:47.147 [2024-12-09 05:18:29.164277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efaea0 is same with the state(6) to be set 00:23:47.147 [2024-12-09 05:18:29.164286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efaea0 is same with the state(6) to be set 00:23:47.147 [2024-12-09 
05:18:29.164295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efaea0 is same with the state(6) to be set 00:23:47.147 [2024-12-09 05:18:29.164303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efaea0 is same with the state(6) to be set 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write 
completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.147 Write completed with error (sct=0, sc=8) 00:23:47.147 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 [2024-12-09 05:18:29.165043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efa9d0 is same with the state(6) to be set 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 [2024-12-09 05:18:29.165062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efa9d0 is same with the state(6) to be set 00:23:47.148 starting I/O failed: -6 00:23:47.148 [2024-12-09 05:18:29.165071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efa9d0 is same with the 
state(6) to be set 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 [2024-12-09 05:18:29.165081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efa9d0 is same with the state(6) to be set 00:23:47.148 starting I/O failed: -6 00:23:47.148 [2024-12-09 05:18:29.165089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efa9d0 is same with the state(6) to be set 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 [2024-12-09 05:18:29.165098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efa9d0 is same with the state(6) to be set 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 [2024-12-09 05:18:29.165142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, 
sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error 
(sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with 
error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 [2024-12-09 05:18:29.166994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:47.148 NVMe io qpair process completion error 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 
00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 starting I/O failed: -6 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 Write completed with error (sct=0, sc=8) 00:23:47.148 [2024-12-09 05:18:29.168173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:47.149 starting I/O failed: -6 00:23:47.149 Write completed with error (sct=0, sc=8) 00:23:47.149 starting I/O failed: -6 00:23:47.149 Write completed with error (sct=0, sc=8) 00:23:47.149 Write completed with error (sct=0, sc=8) 00:23:47.149 starting I/O failed: -6 00:23:47.149 Write completed with error (sct=0, sc=8) 00:23:47.149 Write completed with error (sct=0, sc=8) 00:23:47.149 starting I/O failed: -6 00:23:47.149 Write completed with error (sct=0, sc=8) 00:23:47.149 Write completed with error (sct=0, sc=8) 00:23:47.149 starting I/O failed: -6 00:23:47.149 Write completed with error (sct=0, sc=8) 00:23:47.149 Write completed with error (sct=0, sc=8) 00:23:47.149 starting I/O failed: -6 00:23:47.149 Write 
00:23:47.149 Write completed with error (sct=0, sc=8)
00:23:47.149 starting I/O failed: -6
00:23:47.149 [2024-12-09 05:18:29.169088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:47.149 [2024-12-09 05:18:29.170104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:47.150 Write completed with error (sct=0, sc=8)
00:23:47.150 starting I/O failed: -6
00:23:47.150 [2024-12-09 05:18:29.171973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:47.150 NVMe io qpair process completion error
00:23:47.150 [2024-12-09 05:18:29.173001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:47.150 [2024-12-09 05:18:29.173933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:47.151 Write completed with error (sct=0, sc=8)
00:23:47.151 starting I/O failed: -6
00:23:47.151 [2024-12-09 05:18:29.174931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:47.151 [2024-12-09 05:18:29.176638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:47.151 NVMe io qpair process completion error
00:23:47.152 Write completed with error (sct=0, sc=8)
00:23:47.152 starting I/O failed: -6
00:23:47.152 [2024-12-09 05:18:29.177601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:47.152 [2024-12-09 05:18:29.178488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:47.152 [2024-12-09 05:18:29.179515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:47.153 Write completed with error (sct=0, sc=8)
00:23:47.153 starting I/O failed: -6
00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: 
-6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O 
failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 [2024-12-09 05:18:29.182689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:47.153 NVMe io qpair process completion error 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with 
error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 [2024-12-09 05:18:29.183669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 
00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write completed with error (sct=0, sc=8) 00:23:47.153 Write 
completed with error (sct=0, sc=8) 00:23:47.153 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 [2024-12-09 05:18:29.184573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with 
error (sct=0, sc=8) 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 
Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 [2024-12-09 05:18:29.185589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write 
completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 
Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.154 Write completed with error (sct=0, sc=8) 00:23:47.154 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 
00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 [2024-12-09 05:18:29.187892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:47.155 NVMe io qpair process completion error 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed 
with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 [2024-12-09 05:18:29.188963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 
Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 starting I/O failed: -6 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 Write completed with error (sct=0, sc=8) 00:23:47.155 
00:23:47.155 [repeated entries: "Write completed with error (sct=0, sc=8)" interleaved with "starting I/O failed: -6"]
00:23:47.155 [2024-12-09 05:18:29.189856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:47.155 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries]
00:23:47.156 [2024-12-09 05:18:29.190864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:47.156 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries]
00:23:47.156 [2024-12-09 05:18:29.192768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:47.156 NVMe io qpair process completion error
00:23:47.156 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries]
00:23:47.156 [2024-12-09 05:18:29.193816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:47.156 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries]
00:23:47.157 [2024-12-09 05:18:29.194694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:47.157 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries]
00:23:47.157 [2024-12-09 05:18:29.195695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:47.157 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries]
00:23:47.158 [2024-12-09 05:18:29.197545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:47.158 NVMe io qpair process completion error
00:23:47.158 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries]
00:23:47.158 [2024-12-09 05:18:29.198543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:47.158 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries]
00:23:47.158 [2024-12-09 05:18:29.199457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:47.158 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries]
00:23:47.159 [2024-12-09 05:18:29.200429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:47.159 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries]
00:23:47.159 [2024-12-09 05:18:29.203217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:47.159 NVMe io qpair process completion error
00:23:47.159 Write completed with error (sct=0, sc=8)
00:23:47.159 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for each outstanding I/O ...]
00:23:47.160 [2024-12-09 05:18:29.205136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:47.160 Write completed with error (sct=0, sc=8)
00:23:47.160 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for each outstanding I/O ...]
00:23:47.160 [2024-12-09 05:18:29.206122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:47.160 Write completed with error (sct=0, sc=8)
00:23:47.160 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for each outstanding I/O ...]
00:23:47.161 [2024-12-09 05:18:29.209165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:47.161 NVMe io qpair process completion error
00:23:47.161 Initializing NVMe Controllers
00:23:47.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:47.161 Controller IO queue size 128, less than required.
00:23:47.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:47.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:47.161 Controller IO queue size 128, less than required.
00:23:47.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:47.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:47.161 Controller IO queue size 128, less than required.
00:23:47.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:47.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:47.161 Controller IO queue size 128, less than required.
00:23:47.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:47.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:47.161 Controller IO queue size 128, less than required.
00:23:47.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:47.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:47.161 Controller IO queue size 128, less than required.
00:23:47.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:47.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:47.161 Controller IO queue size 128, less than required.
00:23:47.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:47.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:47.161 Controller IO queue size 128, less than required.
00:23:47.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:47.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:47.161 Controller IO queue size 128, less than required.
00:23:47.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:47.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:47.161 Controller IO queue size 128, less than required.
00:23:47.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:47.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:47.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:47.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:47.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:47.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:47.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:47.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:47.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:47.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:47.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:47.161 Initialization complete. Launching workers.
00:23:47.161 ========================================================
00:23:47.161 Latency(us)
00:23:47.161 Device Information                                                       :    IOPS   MiB/s  Average     min        max
00:23:47.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2282.90 98.09 56074.58 918.88 106685.91
00:23:47.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2293.06 98.53 55847.95 886.33 105157.27
00:23:47.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2296.74 98.69 55774.92 824.23 106070.88
00:23:47.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2283.54 98.12 56112.60 689.02 108750.28
00:23:47.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2288.52 98.33 56016.91 489.84 102270.72
00:23:47.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2306.91 99.12 54980.44 969.61 101207.44
00:23:47.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2257.59 97.01 56195.89 771.32  99424.68
00:23:47.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2275.76 97.79 55765.63 885.57  91671.70
00:23:47.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2299.55 98.81 55203.72 884.61  97226.16
00:23:47.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2273.59 97.69 55847.10 887.60 96699.24
00:23:47.161 ========================================================
00:23:47.161 Total                                                                    : 22858.15 982.19 55780.32 489.84 108750.28
00:23:47.161
00:23:47.161 [2024-12-09 05:18:29.212553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8740 is same with the state(6) to be set
00:23:47.161 [2024-12-09 05:18:29.212601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f7560 is same with the state(6) to be set
00:23:47.161 [2024-12-09 05:18:29.212633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8410 is same with the state(6) to be set
00:23:47.161 [2024-12-09 05:18:29.212664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f7ef0 is same with the state(6) to be set
00:23:47.161 [2024-12-09 05:18:29.212702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f7890 is same with the state(6) to be set
00:23:47.161 [2024-12-09 05:18:29.212733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6870 is same with the state(6) to be set
00:23:47.161 [2024-12-09 05:18:29.212764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f99c0 is same with the state(6) to be set
00:23:47.161 [2024-12-09 05:18:29.212795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f7bc0 is same with the state(6) to be set
00:23:47.161 [2024-12-09 05:18:29.212826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8a70 is same with the state(6) to be set
00:23:47.161 [2024-12-09 05:18:29.212856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f9690 is same with the state(6) to be set
00:23:47.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:47.161 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:23:48.100 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 554424
00:23:48.100 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:23:48.100 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 554424
00:23:48.100 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 --
common/autotest_common.sh@640 -- # local arg=wait
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 554424
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 554117 ']'
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 554117
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 554117 ']'
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 554117
00:23:48.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (554117) - No such process
00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 554117 is not found'
00:23:48.359 Process with pid 554117 is not found
00:23:48.359
05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.359 05:18:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.894 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:50.894 00:23:50.894 real 0m10.564s 00:23:50.894 user 0m27.782s 00:23:50.894 sys 0m5.355s 00:23:50.894 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:50.894 05:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:50.894 ************************************ 00:23:50.894 END TEST nvmf_shutdown_tc4 00:23:50.894 ************************************ 00:23:50.894 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:50.894 00:23:50.894 real 0m44.698s 00:23:50.894 user 1m48.363s 00:23:50.894 sys 0m16.356s 00:23:50.894 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:50.894 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:50.894 ************************************ 00:23:50.894 END TEST nvmf_shutdown 00:23:50.894 ************************************ 00:23:50.894 05:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:50.894 05:18:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:50.894 05:18:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:50.894 05:18:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:50.894 ************************************ 00:23:50.894 START TEST nvmf_nsid 00:23:50.894 ************************************ 00:23:50.894 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:50.894 * Looking for test storage... 
00:23:50.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:50.894 
05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:50.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.894 --rc genhtml_branch_coverage=1 00:23:50.894 --rc genhtml_function_coverage=1 00:23:50.894 --rc genhtml_legend=1 00:23:50.894 --rc geninfo_all_blocks=1 00:23:50.894 --rc 
geninfo_unexecuted_blocks=1 00:23:50.894 00:23:50.894 ' 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:50.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.894 --rc genhtml_branch_coverage=1 00:23:50.894 --rc genhtml_function_coverage=1 00:23:50.894 --rc genhtml_legend=1 00:23:50.894 --rc geninfo_all_blocks=1 00:23:50.894 --rc geninfo_unexecuted_blocks=1 00:23:50.894 00:23:50.894 ' 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:50.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.894 --rc genhtml_branch_coverage=1 00:23:50.894 --rc genhtml_function_coverage=1 00:23:50.894 --rc genhtml_legend=1 00:23:50.894 --rc geninfo_all_blocks=1 00:23:50.894 --rc geninfo_unexecuted_blocks=1 00:23:50.894 00:23:50.894 ' 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:50.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.894 --rc genhtml_branch_coverage=1 00:23:50.894 --rc genhtml_function_coverage=1 00:23:50.894 --rc genhtml_legend=1 00:23:50.894 --rc geninfo_all_blocks=1 00:23:50.894 --rc geninfo_unexecuted_blocks=1 00:23:50.894 00:23:50.894 ' 00:23:50.894 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
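The `lt 1.15 2` gate above (via `cmp_versions` in `scripts/common.sh`) splits each version string on `.`/`-`/`:` with `IFS=.-: read -ra` and compares numerically, component by component. A standalone sketch of that comparison (the helper name and exact structure are illustrative, not the verbatim SPDK code):

```shell
#!/usr/bin/env bash
# Component-wise numeric version comparison, as in the lcov version gate above.
# ver_lt A B  ->  exit 0 iff version A < version B
ver_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"        # split on the same separators the log shows
    IFS='.-:' read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}       # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                              # equal is not less-than
}
```

Comparing numerically rather than lexically is the point: a plain string sort would put `1.10` before `1.2`.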
00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.895 05:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:50.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:50.895 05:18:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:59.022 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:59.022 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:59.022 Found net devices under 0000:af:00.0: cvl_0_0 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:59.022 Found net devices under 0000:af:00.1: cvl_0_1 00:23:59.022 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:59.023 05:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:59.023 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:23:59.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:23:59.023 00:23:59.023 --- 10.0.0.2 ping statistics --- 00:23:59.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.023 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:59.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:59.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:23:59.023 00:23:59.023 --- 10.0.0.1 ping statistics --- 00:23:59.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.023 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:59.023 05:18:40 
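The `nvmf_tcp_init` sequence above builds the single-host target/initiator topology: one E810 port (`cvl_0_0`) is moved into a network namespace to act as the target at 10.0.0.2, while its sibling port (`cvl_0_1`) stays in the default namespace as the initiator at 10.0.0.1, and the ping pair verifies reachability in both directions. A condensed sketch of those steps (requires root; the NIC names are the hardware-specific ports from this log, and the cabling between the two ports is assumed):

```shell
#!/usr/bin/env bash
# Split one host into NVMe/TCP target and initiator using a network namespace.
set -e
ip netns add cvl_0_0_ns_spdk                     # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move the target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator IP, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port, tagged so cleanup can sweep SPDK rules later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF: nsid test'
ping -c 1 10.0.0.2                               # initiator -> target reachability
```

With this in place, `nvmf_tgt` is launched under `ip netns exec cvl_0_0_ns_spdk` (as the `nvmfappstart` line below shows), so target and initiator exercise a real TCP path over the wire between the two ports.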
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=559205 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 559205 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 559205 ']' 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.023 05:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:59.023 [2024-12-09 05:18:40.486555] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:23:59.023 [2024-12-09 05:18:40.486602] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.023 [2024-12-09 05:18:40.586308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.023 [2024-12-09 05:18:40.626976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.023 [2024-12-09 05:18:40.627015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.023 [2024-12-09 05:18:40.627025] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.023 [2024-12-09 05:18:40.627033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.023 [2024-12-09 05:18:40.627056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:59.023 [2024-12-09 05:18:40.627670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=559410 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.023 
05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=6be92c46-dd41-44e7-99a3-0c51d7c14c87 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=114667c7-c99a-4c5e-81a9-3b2ee02ee83c 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=65a171c2-33d9-4bdc-b3a9-9349bfaeb8d9 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.023 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:59.023 null0 00:23:59.023 null1 00:23:59.023 [2024-12-09 05:18:41.429872] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:23:59.023 [2024-12-09 05:18:41.429922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid559410 ] 00:23:59.023 null2 00:23:59.023 [2024-12-09 05:18:41.439185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.023 [2024-12-09 05:18:41.463420] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.283 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.283 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 559410 /var/tmp/tgt2.sock 00:23:59.283 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 559410 ']' 00:23:59.283 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:59.283 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.283 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:59.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:23:59.283 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.283 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:59.283 [2024-12-09 05:18:41.525574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.283 [2024-12-09 05:18:41.564344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.542 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.542 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:59.542 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:59.800 [2024-12-09 05:18:42.086278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.800 [2024-12-09 05:18:42.102403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:59.800 nvme0n1 nvme0n2 00:23:59.800 nvme1n1 00:23:59.800 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:59.800 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:59.800 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 00:24:01.178 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:01.178 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:01.178 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:24:01.178 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:24:01.178 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:24:01.178 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:01.178 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:01.178 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:01.178 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:01.178 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:01.178 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:24:01.178 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:24:01.178 05:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 6be92c46-dd41-44e7-99a3-0c51d7c14c87 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:02.116 05:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6be92c46dd4144e799a30c51d7c14c87 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6BE92C46DD4144E799A30C51D7C14C87 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 6BE92C46DD4144E799A30C51D7C14C87 == \6\B\E\9\2\C\4\6\D\D\4\1\4\4\E\7\9\9\A\3\0\C\5\1\D\7\C\1\4\C\8\7 ]] 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 114667c7-c99a-4c5e-81a9-3b2ee02ee83c 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:02.116 
05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=114667c7c99a4c5e81a93b2ee02ee83c 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 114667C7C99A4C5E81A93B2EE02EE83C 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 114667C7C99A4C5E81A93B2EE02EE83C == \1\1\4\6\6\7\C\7\C\9\9\A\4\C\5\E\8\1\A\9\3\B\2\E\E\0\2\E\E\8\3\C ]] 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:02.116 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:24:02.376 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:02.376 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 65a171c2-33d9-4bdc-b3a9-9349bfaeb8d9 00:24:02.376 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:02.376 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:02.376 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:02.376 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:24:02.376 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:02.376 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=65a171c233d94bdcb3a99349bfaeb8d9 00:24:02.376 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 65A171C233D94BDCB3A99349BFAEB8D9 00:24:02.376 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 65A171C233D94BDCB3A99349BFAEB8D9 == \6\5\A\1\7\1\C\2\3\3\D\9\4\B\D\C\B\3\A\9\9\3\4\9\B\F\A\E\B\8\D\9 ]] 00:24:02.376 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:02.376 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:02.376 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:02.376 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 559410 00:24:02.376 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 559410 ']' 00:24:02.376 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 559410 00:24:02.376 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:02.376 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.635 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 559410 00:24:02.636 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:02.636 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:02.636 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 559410' 00:24:02.636 killing process with pid 559410 00:24:02.636 05:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 559410 00:24:02.636 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 559410 00:24:02.894 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:02.894 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:02.894 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:24:02.894 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:02.894 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:24:02.894 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:02.894 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:02.894 rmmod nvme_tcp 00:24:02.894 rmmod nvme_fabrics 00:24:02.894 rmmod nvme_keyring 00:24:02.894 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:02.894 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:24:02.894 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:24:02.894 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 559205 ']' 00:24:02.894 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 559205 00:24:02.894 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 559205 ']' 00:24:02.894 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 559205 00:24:02.894 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:02.894 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.894 05:18:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 559205 00:24:03.153 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:03.153 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:03.153 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 559205' 00:24:03.153 killing process with pid 559205 00:24:03.153 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 559205 00:24:03.153 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 559205 00:24:03.153 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:03.153 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:03.153 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:03.153 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:24:03.153 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:24:03.153 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:03.153 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:24:03.153 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:03.153 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:03.153 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.153 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.153 05:18:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.739 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:05.739 00:24:05.739 real 0m14.774s 00:24:05.739 user 0m11.188s 00:24:05.739 sys 0m6.988s 00:24:05.739 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:05.739 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:05.739 ************************************ 00:24:05.739 END TEST nvmf_nsid 00:24:05.739 ************************************ 00:24:05.739 05:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:05.739 00:24:05.739 real 13m5.491s 00:24:05.739 user 27m2.885s 00:24:05.739 sys 4m27.432s 00:24:05.739 05:18:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:05.739 05:18:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:05.739 ************************************ 00:24:05.739 END TEST nvmf_target_extra 00:24:05.739 ************************************ 00:24:05.740 05:18:47 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:05.740 05:18:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:05.740 05:18:47 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:05.740 05:18:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:05.740 ************************************ 00:24:05.740 START TEST nvmf_host 00:24:05.740 ************************************ 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:05.740 * Looking for test storage... 
00:24:05.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:05.740 05:18:47 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:05.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.740 --rc genhtml_branch_coverage=1 00:24:05.740 --rc genhtml_function_coverage=1 00:24:05.740 --rc genhtml_legend=1 00:24:05.740 --rc geninfo_all_blocks=1 00:24:05.740 --rc geninfo_unexecuted_blocks=1 00:24:05.740 00:24:05.740 ' 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:05.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.740 --rc genhtml_branch_coverage=1 00:24:05.740 --rc genhtml_function_coverage=1 00:24:05.740 --rc genhtml_legend=1 00:24:05.740 --rc 
geninfo_all_blocks=1 00:24:05.740 --rc geninfo_unexecuted_blocks=1 00:24:05.740 00:24:05.740 ' 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:05.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.740 --rc genhtml_branch_coverage=1 00:24:05.740 --rc genhtml_function_coverage=1 00:24:05.740 --rc genhtml_legend=1 00:24:05.740 --rc geninfo_all_blocks=1 00:24:05.740 --rc geninfo_unexecuted_blocks=1 00:24:05.740 00:24:05.740 ' 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:05.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.740 --rc genhtml_branch_coverage=1 00:24:05.740 --rc genhtml_function_coverage=1 00:24:05.740 --rc genhtml_legend=1 00:24:05.740 --rc geninfo_all_blocks=1 00:24:05.740 --rc geninfo_unexecuted_blocks=1 00:24:05.740 00:24:05.740 ' 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:05.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.740 ************************************ 00:24:05.740 START TEST nvmf_multicontroller 00:24:05.740 ************************************ 00:24:05.740 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:05.740 * Looking for test storage... 
00:24:05.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:05.741 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:05.741 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:24:05.741 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:05.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.999 --rc genhtml_branch_coverage=1 00:24:05.999 --rc genhtml_function_coverage=1 
00:24:05.999 --rc genhtml_legend=1 00:24:05.999 --rc geninfo_all_blocks=1 00:24:05.999 --rc geninfo_unexecuted_blocks=1 00:24:05.999 00:24:05.999 ' 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:05.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.999 --rc genhtml_branch_coverage=1 00:24:05.999 --rc genhtml_function_coverage=1 00:24:05.999 --rc genhtml_legend=1 00:24:05.999 --rc geninfo_all_blocks=1 00:24:05.999 --rc geninfo_unexecuted_blocks=1 00:24:05.999 00:24:05.999 ' 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:05.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.999 --rc genhtml_branch_coverage=1 00:24:05.999 --rc genhtml_function_coverage=1 00:24:05.999 --rc genhtml_legend=1 00:24:05.999 --rc geninfo_all_blocks=1 00:24:05.999 --rc geninfo_unexecuted_blocks=1 00:24:05.999 00:24:05.999 ' 00:24:05.999 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:05.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.999 --rc genhtml_branch_coverage=1 00:24:05.999 --rc genhtml_function_coverage=1 00:24:05.999 --rc genhtml_legend=1 00:24:05.999 --rc geninfo_all_blocks=1 00:24:05.999 --rc geninfo_unexecuted_blocks=1 00:24:05.999 00:24:05.999 ' 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.000 05:18:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:06.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:24:06.000 05:18:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.123 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.123 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:24:14.123 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:14.123 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:14.123 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:14.123 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:14.123 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:14.123 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:24:14.123 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:14.123 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:14.124 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:14.124 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.124 05:18:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:14.124 Found net devices under 0000:af:00.0: cvl_0_0 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:14.124 Found net devices under 0000:af:00.1: cvl_0_1 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:14.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:24:14.124 00:24:14.124 --- 10.0.0.2 ping statistics --- 00:24:14.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.124 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:14.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:24:14.124 00:24:14.124 --- 10.0.0.1 ping statistics --- 00:24:14.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.124 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=563879 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 563879 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 563879 ']' 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:14.124 05:18:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.124 [2024-12-09 05:18:55.654892] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:24:14.124 [2024-12-09 05:18:55.654936] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.124 [2024-12-09 05:18:55.751518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:14.124 [2024-12-09 05:18:55.794893] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.124 [2024-12-09 05:18:55.794930] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:14.124 [2024-12-09 05:18:55.794940] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.124 [2024-12-09 05:18:55.794948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.124 [2024-12-09 05:18:55.794955] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:14.124 [2024-12-09 05:18:55.796571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.124 [2024-12-09 05:18:55.796679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.124 [2024-12-09 05:18:55.796681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:14.124 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:14.124 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:14.124 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:14.124 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:14.124 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.124 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.124 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:14.124 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.124 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.124 [2024-12-09 05:18:56.540781] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.124 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.124 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:14.124 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.124 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.124 Malloc0 00:24:14.124 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.124 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:14.124 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.124 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.383 [2024-12-09 
05:18:56.605481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.383 [2024-12-09 05:18:56.613410] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.383 Malloc1 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=564122 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 564122 /var/tmp/bdevperf.sock 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 564122 ']' 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:14.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:14.383 05:18:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.330 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.330 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:15.330 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:15.330 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.330 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.641 NVMe0n1 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.641 1 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:15.641 05:18:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.641 request: 00:24:15.641 { 00:24:15.641 "name": "NVMe0", 00:24:15.641 "trtype": "tcp", 00:24:15.641 "traddr": "10.0.0.2", 00:24:15.641 "adrfam": "ipv4", 00:24:15.641 "trsvcid": "4420", 00:24:15.641 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.641 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:15.641 "hostaddr": "10.0.0.1", 00:24:15.641 "prchk_reftag": false, 00:24:15.641 "prchk_guard": false, 00:24:15.641 "hdgst": false, 00:24:15.641 "ddgst": false, 00:24:15.641 "allow_unrecognized_csi": false, 00:24:15.641 "method": "bdev_nvme_attach_controller", 00:24:15.641 "req_id": 1 00:24:15.641 } 00:24:15.641 Got JSON-RPC error response 00:24:15.641 response: 00:24:15.641 { 00:24:15.641 "code": -114, 00:24:15.641 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:15.641 } 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:15.641 05:18:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.641 request: 00:24:15.641 { 00:24:15.641 "name": "NVMe0", 00:24:15.641 "trtype": "tcp", 00:24:15.641 "traddr": "10.0.0.2", 00:24:15.641 "adrfam": "ipv4", 00:24:15.641 "trsvcid": "4420", 00:24:15.641 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:15.641 "hostaddr": "10.0.0.1", 00:24:15.641 "prchk_reftag": false, 00:24:15.641 "prchk_guard": false, 00:24:15.641 "hdgst": false, 00:24:15.641 "ddgst": false, 00:24:15.641 "allow_unrecognized_csi": false, 00:24:15.641 "method": "bdev_nvme_attach_controller", 00:24:15.641 "req_id": 1 00:24:15.641 } 00:24:15.641 Got JSON-RPC error response 00:24:15.641 response: 00:24:15.641 { 00:24:15.641 "code": -114, 00:24:15.641 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:15.641 } 00:24:15.641 05:18:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.641 request: 00:24:15.641 { 00:24:15.641 "name": "NVMe0", 00:24:15.641 "trtype": "tcp", 00:24:15.641 "traddr": "10.0.0.2", 00:24:15.641 "adrfam": "ipv4", 00:24:15.641 "trsvcid": "4420", 00:24:15.641 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.641 "hostaddr": "10.0.0.1", 00:24:15.641 "prchk_reftag": false, 00:24:15.641 "prchk_guard": false, 00:24:15.641 "hdgst": false, 00:24:15.641 "ddgst": false, 00:24:15.641 "multipath": "disable", 00:24:15.641 "allow_unrecognized_csi": false, 00:24:15.641 "method": "bdev_nvme_attach_controller", 00:24:15.641 "req_id": 1 00:24:15.641 } 00:24:15.641 Got JSON-RPC error response 00:24:15.641 response: 00:24:15.641 { 00:24:15.641 "code": -114, 00:24:15.641 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:15.641 } 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.641 request: 00:24:15.641 { 00:24:15.641 "name": "NVMe0", 00:24:15.641 "trtype": "tcp", 00:24:15.641 "traddr": "10.0.0.2", 00:24:15.641 "adrfam": "ipv4", 00:24:15.641 "trsvcid": "4420", 00:24:15.641 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.641 "hostaddr": "10.0.0.1", 00:24:15.641 "prchk_reftag": false, 00:24:15.641 "prchk_guard": false, 00:24:15.641 "hdgst": false, 00:24:15.641 "ddgst": false, 00:24:15.641 "multipath": "failover", 00:24:15.641 "allow_unrecognized_csi": false, 00:24:15.641 "method": "bdev_nvme_attach_controller", 00:24:15.641 "req_id": 1 00:24:15.641 } 00:24:15.641 Got JSON-RPC error response 00:24:15.641 response: 00:24:15.641 { 00:24:15.641 "code": -114, 00:24:15.641 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:15.641 } 00:24:15.641 05:18:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.641 05:18:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.932 NVMe0n1 00:24:15.932 05:18:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.932 05:18:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:15.932 05:18:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.932 05:18:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.932 05:18:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.932 05:18:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:15.932 05:18:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.932 05:18:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.932 00:24:15.932 05:18:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.932 05:18:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:15.932 05:18:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.932 05:18:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:15.932 05:18:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.932 05:18:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.932 05:18:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:15.932 05:18:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:17.043 { 00:24:17.043 "results": [ 00:24:17.043 { 00:24:17.043 "job": "NVMe0n1", 00:24:17.043 "core_mask": "0x1", 00:24:17.043 "workload": "write", 00:24:17.043 "status": "finished", 00:24:17.043 "queue_depth": 128, 00:24:17.043 "io_size": 4096, 00:24:17.043 "runtime": 1.003258, 00:24:17.043 "iops": 25449.086874961376, 00:24:17.043 "mibps": 99.41049560531788, 00:24:17.043 "io_failed": 0, 00:24:17.043 "io_timeout": 0, 00:24:17.043 "avg_latency_us": 5023.159343004857, 00:24:17.043 "min_latency_us": 3067.0848, 00:24:17.043 "max_latency_us": 12425.6256 00:24:17.043 } 00:24:17.043 ], 00:24:17.043 "core_count": 1 00:24:17.043 } 00:24:17.043 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:24:17.043 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.043 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.043 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.043 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:17.043 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 564122 00:24:17.043 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 564122 ']' 00:24:17.043 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 564122 00:24:17.043 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:17.043 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.043 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 564122 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 564122' 00:24:17.365 killing process with pid 564122 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 564122 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 564122 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:24:17.365 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:17.365 [2024-12-09 05:18:56.721484] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:24:17.365 [2024-12-09 05:18:56.721534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid564122 ] 00:24:17.365 [2024-12-09 05:18:56.814292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.365 [2024-12-09 05:18:56.853558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.365 [2024-12-09 05:18:58.272271] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 3cd2de7a-4490-4d1b-8dc7-f5411d058bca already exists 00:24:17.365 [2024-12-09 05:18:58.272299] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:3cd2de7a-4490-4d1b-8dc7-f5411d058bca alias for bdev NVMe1n1 00:24:17.365 [2024-12-09 05:18:58.272309] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:17.365 Running I/O for 1 seconds... 00:24:17.365 25404.00 IOPS, 99.23 MiB/s 00:24:17.365 Latency(us) 00:24:17.365 [2024-12-09T04:18:59.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.365 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:17.365 NVMe0n1 : 1.00 25449.09 99.41 0.00 0.00 5023.16 3067.08 12425.63 00:24:17.365 [2024-12-09T04:18:59.835Z] =================================================================================================================== 00:24:17.365 [2024-12-09T04:18:59.835Z] Total : 25449.09 99.41 0.00 0.00 5023.16 3067.08 12425.63 00:24:17.365 Received shutdown signal, test time was about 1.000000 seconds 00:24:17.365 00:24:17.365 Latency(us) 00:24:17.365 [2024-12-09T04:18:59.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.365 [2024-12-09T04:18:59.835Z] =================================================================================================================== 00:24:17.365 [2024-12-09T04:18:59.835Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:24:17.365 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:17.365 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:17.365 rmmod nvme_tcp 00:24:17.365 rmmod nvme_fabrics 00:24:17.365 rmmod nvme_keyring 00:24:17.624 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:17.624 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:17.624 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:24:17.624 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 563879 ']' 00:24:17.624 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 563879 00:24:17.624 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 563879 ']' 00:24:17.624 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 563879 
00:24:17.624 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:17.624 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.624 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 563879 00:24:17.624 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:17.624 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:17.624 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 563879' 00:24:17.624 killing process with pid 563879 00:24:17.624 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 563879 00:24:17.624 05:18:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 563879 00:24:17.884 05:19:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:17.884 05:19:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:17.884 05:19:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:17.884 05:19:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:17.884 05:19:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:17.884 05:19:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:17.884 05:19:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:17.884 05:19:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:17.884 05:19:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:24:17.884 05:19:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.884 05:19:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.884 05:19:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.795 05:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:19.795 00:24:19.795 real 0m14.127s 00:24:19.795 user 0m18.296s 00:24:19.795 sys 0m6.526s 00:24:19.795 05:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:19.795 05:19:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:19.795 ************************************ 00:24:19.795 END TEST nvmf_multicontroller 00:24:19.795 ************************************ 00:24:19.795 05:19:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:19.795 05:19:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:19.795 05:19:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:19.795 05:19:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.056 ************************************ 00:24:20.056 START TEST nvmf_aer 00:24:20.056 ************************************ 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:20.056 * Looking for test storage... 
00:24:20.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:20.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.056 --rc genhtml_branch_coverage=1 00:24:20.056 --rc genhtml_function_coverage=1 00:24:20.056 --rc genhtml_legend=1 00:24:20.056 --rc geninfo_all_blocks=1 00:24:20.056 --rc geninfo_unexecuted_blocks=1 00:24:20.056 00:24:20.056 ' 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:20.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.056 --rc 
genhtml_branch_coverage=1 00:24:20.056 --rc genhtml_function_coverage=1 00:24:20.056 --rc genhtml_legend=1 00:24:20.056 --rc geninfo_all_blocks=1 00:24:20.056 --rc geninfo_unexecuted_blocks=1 00:24:20.056 00:24:20.056 ' 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:20.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.056 --rc genhtml_branch_coverage=1 00:24:20.056 --rc genhtml_function_coverage=1 00:24:20.056 --rc genhtml_legend=1 00:24:20.056 --rc geninfo_all_blocks=1 00:24:20.056 --rc geninfo_unexecuted_blocks=1 00:24:20.056 00:24:20.056 ' 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:20.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.056 --rc genhtml_branch_coverage=1 00:24:20.056 --rc genhtml_function_coverage=1 00:24:20.056 --rc genhtml_legend=1 00:24:20.056 --rc geninfo_all_blocks=1 00:24:20.056 --rc geninfo_unexecuted_blocks=1 00:24:20.056 00:24:20.056 ' 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.056 05:19:02 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.056 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:20.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:24:20.317 05:19:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:28.479 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.479 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:28.480 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.480 05:19:09 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:28.480 Found net devices under 0000:af:00.0: cvl_0_0 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:28.480 Found net devices under 0000:af:00.1: cvl_0_1 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:28.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:28.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:24:28.480 00:24:28.480 --- 10.0.0.2 ping statistics --- 00:24:28.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.480 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:28.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:24:28.480 00:24:28.480 --- 10.0.0.1 ping statistics --- 00:24:28.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.480 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=568379 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 568379 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 568379 ']' 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.480 05:19:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.481 [2024-12-09 05:19:09.857915] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:24:28.481 [2024-12-09 05:19:09.857964] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.481 [2024-12-09 05:19:09.955795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:28.481 [2024-12-09 05:19:09.999653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:28.481 [2024-12-09 05:19:09.999687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.481 [2024-12-09 05:19:09.999697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.481 [2024-12-09 05:19:09.999705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.481 [2024-12-09 05:19:09.999712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.481 [2024-12-09 05:19:10.001453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.481 [2024-12-09 05:19:10.001488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.481 [2024-12-09 05:19:10.001527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.481 [2024-12-09 05:19:10.001530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.481 [2024-12-09 05:19:10.742401] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.481 Malloc0 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.481 [2024-12-09 05:19:10.806625] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.481 [ 00:24:28.481 { 00:24:28.481 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:28.481 "subtype": "Discovery", 00:24:28.481 "listen_addresses": [], 00:24:28.481 "allow_any_host": true, 00:24:28.481 "hosts": [] 00:24:28.481 }, 00:24:28.481 { 00:24:28.481 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.481 "subtype": "NVMe", 00:24:28.481 "listen_addresses": [ 00:24:28.481 { 00:24:28.481 "trtype": "TCP", 00:24:28.481 "adrfam": "IPv4", 00:24:28.481 "traddr": "10.0.0.2", 00:24:28.481 "trsvcid": "4420" 00:24:28.481 } 00:24:28.481 ], 00:24:28.481 "allow_any_host": true, 00:24:28.481 "hosts": [], 00:24:28.481 "serial_number": "SPDK00000000000001", 00:24:28.481 "model_number": "SPDK bdev Controller", 00:24:28.481 "max_namespaces": 2, 00:24:28.481 "min_cntlid": 1, 00:24:28.481 "max_cntlid": 65519, 00:24:28.481 "namespaces": [ 00:24:28.481 { 00:24:28.481 "nsid": 1, 00:24:28.481 "bdev_name": "Malloc0", 00:24:28.481 "name": "Malloc0", 00:24:28.481 "nguid": "6C14E4FDFF57439AA01F3B3FC5E4B4FD", 00:24:28.481 "uuid": "6c14e4fd-ff57-439a-a01f-3b3fc5e4b4fd" 00:24:28.481 } 00:24:28.481 ] 00:24:28.481 } 00:24:28.481 ] 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=568662 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:24:28.481 05:19:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:28.742 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:28.742 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:28.742 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:24:28.742 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:28.742 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.742 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.742 Malloc1 00:24:28.742 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.742 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:28.742 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.742 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.742 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.742 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:28.742 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.742 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.742 Asynchronous Event Request test 00:24:28.742 Attaching to 10.0.0.2 00:24:28.743 Attached to 10.0.0.2 00:24:28.743 Registering asynchronous event callbacks... 00:24:28.743 Starting namespace attribute notice tests for all controllers... 00:24:28.743 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:28.743 aer_cb - Changed Namespace 00:24:28.743 Cleaning up... 
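The polling traced above is the common `waitforfile` helper: the aer tool is told via `-t` to create /tmp/aer_touch_file once its AER callbacks are registered, and the script polls for that file every 0.1 s, up to 200 tries (about 20 s). A minimal re-sketch, with the loop inferred from the autotest_common.sh xtrace rather than copied verbatim:

```shell
# Re-sketch of the waitforfile polling traced above: check for the file every
# 0.1 s, up to 200 tries, and return non-zero on timeout. Inferred from the
# autotest_common.sh xtrace, not the verbatim helper.
waitforfile() {
    local i=0
    while [ ! -e "$1" ] && [ $i -lt 200 ]; do
        i=$((i + 1))
        sleep 0.1
    done
    [ -e "$1" ]    # success only if the touch file finally exists
}

# Example: simulate the aer tool creating the touch file shortly after start
touch_file=$(mktemp -u)
( sleep 0.3; touch "$touch_file" ) &
waitforfile "$touch_file" && echo "touch file observed"
rm -f "$touch_file"
```

Using a touch file instead of sleeping a fixed interval keeps the test fast on a lightly loaded machine while still tolerating a slow start of the aer binary.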
00:24:28.743 [
00:24:28.743 {
00:24:28.743 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:24:28.743 "subtype": "Discovery",
00:24:28.743 "listen_addresses": [],
00:24:28.743 "allow_any_host": true,
00:24:28.743 "hosts": []
00:24:28.743 },
00:24:28.743 {
00:24:28.743 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:24:28.743 "subtype": "NVMe",
00:24:28.743 "listen_addresses": [
00:24:28.743 {
00:24:28.743 "trtype": "TCP",
00:24:28.743 "adrfam": "IPv4",
00:24:28.743 "traddr": "10.0.0.2",
00:24:28.743 "trsvcid": "4420"
00:24:28.743 }
00:24:28.743 ],
00:24:28.743 "allow_any_host": true,
00:24:28.743 "hosts": [],
00:24:28.743 "serial_number": "SPDK00000000000001",
00:24:28.743 "model_number": "SPDK bdev Controller",
00:24:28.743 "max_namespaces": 2,
00:24:28.743 "min_cntlid": 1,
00:24:28.743 "max_cntlid": 65519,
00:24:28.743 "namespaces": [
00:24:28.743 {
00:24:28.743 "nsid": 1,
00:24:28.743 "bdev_name": "Malloc0",
00:24:28.743 "name": "Malloc0",
00:24:28.743 "nguid": "6C14E4FDFF57439AA01F3B3FC5E4B4FD",
00:24:28.743 "uuid": "6c14e4fd-ff57-439a-a01f-3b3fc5e4b4fd"
00:24:28.743 },
00:24:28.743 {
00:24:28.743 "nsid": 2,
00:24:28.743 "bdev_name": "Malloc1",
00:24:28.743 "name": "Malloc1",
00:24:28.743 "nguid": "FD177399B943452C839CC26DC5CF0151",
00:24:28.743 "uuid": "fd177399-b943-452c-839c-c26dc5cf0151"
00:24:28.743 }
00:24:28.743 ]
00:24:28.743 }
00:24:28.743 ]
00:24:28.743 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:28.743 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 568662
00:24:28.743 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:24:28.743 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:28.743 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:28.743 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:28.743 05:19:11
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:28.743 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.743 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.743 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.743 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:28.743 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.743 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.743 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.743 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:28.743 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:28.743 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:28.743 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:28.743 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.743 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:28.743 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.743 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.743 rmmod nvme_tcp 00:24:28.743 rmmod nvme_fabrics 00:24:29.003 rmmod nvme_keyring 00:24:29.003 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:29.003 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:29.003 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:29.003 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
568379 ']' 00:24:29.003 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 568379 00:24:29.003 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 568379 ']' 00:24:29.003 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 568379 00:24:29.003 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:24:29.003 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.003 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 568379 00:24:29.003 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:29.003 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:29.003 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 568379' 00:24:29.003 killing process with pid 568379 00:24:29.003 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 568379 00:24:29.003 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 568379 00:24:29.264 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:29.264 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:29.264 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:29.264 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:29.264 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:29.264 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:29.264 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:29.264 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:29.264 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:29.264 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.264 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.264 05:19:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.174 05:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:31.174 00:24:31.174 real 0m11.304s 00:24:31.174 user 0m8.130s 00:24:31.174 sys 0m6.084s 00:24:31.174 05:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.174 05:19:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:31.174 ************************************ 00:24:31.174 END TEST nvmf_aer 00:24:31.174 ************************************ 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.434 ************************************ 00:24:31.434 START TEST nvmf_async_init 00:24:31.434 ************************************ 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:31.434 * Looking for test storage... 
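The teardown above goes through nvmftestfini: unload the nvme-tcp/fabrics/keyring modules, then `killprocess` the target pid (568379), guarded by `kill -0` and a comm check so a stale pid or a sudo wrapper is never signalled. A sketch of that guard, with the details inferred from the autotest_common.sh xtrace:

```shell
# Sketch of the killprocess guard traced above: only signal the pid if it is
# still alive and its comm is not "sudo", then kill and reap it. Inferred
# from the autotest_common.sh xtrace, not copied verbatim.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1                 # pid must still exist
    [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                        # reap; ignore TERM status
}

# Example: start a throwaway background process and kill it through the guard
sleep 60 & bgpid=$!
killprocess "$bgpid"
```

The comm check is the interesting part: when the target was launched under sudo, the recorded pid may belong to the sudo wrapper, and killing that directly would orphan the actual reactor process.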
00:24:31.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.434 05:19:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:31.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.434 --rc genhtml_branch_coverage=1 00:24:31.434 --rc genhtml_function_coverage=1 00:24:31.434 --rc genhtml_legend=1 00:24:31.434 --rc geninfo_all_blocks=1 00:24:31.434 --rc geninfo_unexecuted_blocks=1 00:24:31.434 
00:24:31.434 ' 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:31.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.434 --rc genhtml_branch_coverage=1 00:24:31.434 --rc genhtml_function_coverage=1 00:24:31.434 --rc genhtml_legend=1 00:24:31.434 --rc geninfo_all_blocks=1 00:24:31.434 --rc geninfo_unexecuted_blocks=1 00:24:31.434 00:24:31.434 ' 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:31.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.434 --rc genhtml_branch_coverage=1 00:24:31.434 --rc genhtml_function_coverage=1 00:24:31.434 --rc genhtml_legend=1 00:24:31.434 --rc geninfo_all_blocks=1 00:24:31.434 --rc geninfo_unexecuted_blocks=1 00:24:31.434 00:24:31.434 ' 00:24:31.434 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:31.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.434 --rc genhtml_branch_coverage=1 00:24:31.434 --rc genhtml_function_coverage=1 00:24:31.434 --rc genhtml_legend=1 00:24:31.434 --rc geninfo_all_blocks=1 00:24:31.434 --rc geninfo_unexecuted_blocks=1 00:24:31.434 00:24:31.434 ' 00:24:31.435 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.435 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:31.435 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.435 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.435 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.435 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.435 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.435 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.435 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.435 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.435 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.715 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.715 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:31.715 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:24:31.715 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:31.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=87a2e509adcc4a19bbbb6118bf2dcb9e 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:31.716 05:19:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.839 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.839 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:39.839 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:39.839 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:39.839 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:39.839 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:39.839 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:39.839 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:39.839 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:39.839 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:39.839 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:39.839 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:39.839 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:24:39.839 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:39.839 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:39.839 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:39.840 05:19:20 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:39.840 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:39.840 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:39.840 Found net devices under 0000:af:00.0: cvl_0_0 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:39.840 Found net devices under 0000:af:00.1: cvl_0_1 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.840 05:19:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.840 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.840 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.840 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:39.840 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.840 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:39.840 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:39.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:39.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:24:39.841 00:24:39.841 --- 10.0.0.2 ping statistics --- 00:24:39.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.841 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:39.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:24:39.841 00:24:39.841 --- 10.0.0.1 ping statistics --- 00:24:39.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.841 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=572365 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 572365 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 572365 ']' 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.841 05:19:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.841 [2024-12-09 05:19:21.267961] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:24:39.841 [2024-12-09 05:19:21.268010] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.841 [2024-12-09 05:19:21.366981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.841 [2024-12-09 05:19:21.409652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.841 [2024-12-09 05:19:21.409685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.841 [2024-12-09 05:19:21.409695] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.841 [2024-12-09 05:19:21.409704] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.841 [2024-12-09 05:19:21.409712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:39.841 [2024-12-09 05:19:21.410177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.841 [2024-12-09 05:19:22.153187] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.841 null0 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 87a2e509adcc4a19bbbb6118bf2dcb9e 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.841 [2024-12-09 05:19:22.205440] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.841 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.101 nvme0n1 00:24:40.101 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.101 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:40.102 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.102 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.102 [ 00:24:40.102 { 00:24:40.102 "name": "nvme0n1", 00:24:40.102 "aliases": [ 00:24:40.102 "87a2e509-adcc-4a19-bbbb-6118bf2dcb9e" 00:24:40.102 ], 00:24:40.102 "product_name": "NVMe disk", 00:24:40.102 "block_size": 512, 00:24:40.102 "num_blocks": 2097152, 00:24:40.102 "uuid": "87a2e509-adcc-4a19-bbbb-6118bf2dcb9e", 00:24:40.102 "numa_id": 1, 00:24:40.102 "assigned_rate_limits": { 00:24:40.102 "rw_ios_per_sec": 0, 00:24:40.102 "rw_mbytes_per_sec": 0, 00:24:40.102 "r_mbytes_per_sec": 0, 00:24:40.102 "w_mbytes_per_sec": 0 00:24:40.102 }, 00:24:40.102 "claimed": false, 00:24:40.102 "zoned": false, 00:24:40.102 "supported_io_types": { 00:24:40.102 "read": true, 00:24:40.102 "write": true, 00:24:40.102 "unmap": false, 00:24:40.102 "flush": true, 00:24:40.102 "reset": true, 00:24:40.102 "nvme_admin": true, 00:24:40.102 "nvme_io": true, 00:24:40.102 "nvme_io_md": false, 00:24:40.102 "write_zeroes": true, 00:24:40.102 "zcopy": false, 00:24:40.102 "get_zone_info": false, 00:24:40.102 "zone_management": false, 00:24:40.102 "zone_append": false, 00:24:40.102 "compare": true, 00:24:40.102 "compare_and_write": true, 00:24:40.102 "abort": true, 00:24:40.102 "seek_hole": false, 00:24:40.102 "seek_data": false, 00:24:40.102 "copy": true, 00:24:40.102 
"nvme_iov_md": false 00:24:40.102 }, 00:24:40.102 "memory_domains": [ 00:24:40.102 { 00:24:40.102 "dma_device_id": "system", 00:24:40.102 "dma_device_type": 1 00:24:40.102 } 00:24:40.102 ], 00:24:40.102 "driver_specific": { 00:24:40.102 "nvme": [ 00:24:40.102 { 00:24:40.102 "trid": { 00:24:40.102 "trtype": "TCP", 00:24:40.102 "adrfam": "IPv4", 00:24:40.102 "traddr": "10.0.0.2", 00:24:40.102 "trsvcid": "4420", 00:24:40.102 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:40.102 }, 00:24:40.102 "ctrlr_data": { 00:24:40.102 "cntlid": 1, 00:24:40.102 "vendor_id": "0x8086", 00:24:40.102 "model_number": "SPDK bdev Controller", 00:24:40.102 "serial_number": "00000000000000000000", 00:24:40.102 "firmware_revision": "25.01", 00:24:40.102 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:40.102 "oacs": { 00:24:40.102 "security": 0, 00:24:40.102 "format": 0, 00:24:40.102 "firmware": 0, 00:24:40.102 "ns_manage": 0 00:24:40.102 }, 00:24:40.102 "multi_ctrlr": true, 00:24:40.102 "ana_reporting": false 00:24:40.102 }, 00:24:40.102 "vs": { 00:24:40.102 "nvme_version": "1.3" 00:24:40.102 }, 00:24:40.102 "ns_data": { 00:24:40.102 "id": 1, 00:24:40.102 "can_share": true 00:24:40.102 } 00:24:40.102 } 00:24:40.102 ], 00:24:40.102 "mp_policy": "active_passive" 00:24:40.102 } 00:24:40.102 } 00:24:40.102 ] 00:24:40.102 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.102 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:40.102 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.102 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.102 [2024-12-09 05:19:22.470079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:40.102 [2024-12-09 05:19:22.470140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1e63fa0 (9): Bad file descriptor 00:24:40.362 [2024-12-09 05:19:22.602289] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:24:40.362 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.362 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:40.362 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.362 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.362 [ 00:24:40.362 { 00:24:40.362 "name": "nvme0n1", 00:24:40.362 "aliases": [ 00:24:40.362 "87a2e509-adcc-4a19-bbbb-6118bf2dcb9e" 00:24:40.362 ], 00:24:40.362 "product_name": "NVMe disk", 00:24:40.362 "block_size": 512, 00:24:40.362 "num_blocks": 2097152, 00:24:40.362 "uuid": "87a2e509-adcc-4a19-bbbb-6118bf2dcb9e", 00:24:40.362 "numa_id": 1, 00:24:40.362 "assigned_rate_limits": { 00:24:40.362 "rw_ios_per_sec": 0, 00:24:40.362 "rw_mbytes_per_sec": 0, 00:24:40.362 "r_mbytes_per_sec": 0, 00:24:40.362 "w_mbytes_per_sec": 0 00:24:40.362 }, 00:24:40.362 "claimed": false, 00:24:40.362 "zoned": false, 00:24:40.362 "supported_io_types": { 00:24:40.362 "read": true, 00:24:40.362 "write": true, 00:24:40.362 "unmap": false, 00:24:40.362 "flush": true, 00:24:40.362 "reset": true, 00:24:40.362 "nvme_admin": true, 00:24:40.362 "nvme_io": true, 00:24:40.362 "nvme_io_md": false, 00:24:40.362 "write_zeroes": true, 00:24:40.362 "zcopy": false, 00:24:40.362 "get_zone_info": false, 00:24:40.362 "zone_management": false, 00:24:40.362 "zone_append": false, 00:24:40.362 "compare": true, 00:24:40.362 "compare_and_write": true, 00:24:40.362 "abort": true, 00:24:40.362 "seek_hole": false, 00:24:40.362 "seek_data": false, 00:24:40.362 "copy": true, 00:24:40.362 "nvme_iov_md": false 00:24:40.362 }, 00:24:40.362 "memory_domains": [ 
00:24:40.362 { 00:24:40.362 "dma_device_id": "system", 00:24:40.362 "dma_device_type": 1 00:24:40.362 } 00:24:40.362 ], 00:24:40.362 "driver_specific": { 00:24:40.362 "nvme": [ 00:24:40.362 { 00:24:40.362 "trid": { 00:24:40.362 "trtype": "TCP", 00:24:40.362 "adrfam": "IPv4", 00:24:40.362 "traddr": "10.0.0.2", 00:24:40.362 "trsvcid": "4420", 00:24:40.362 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:40.362 }, 00:24:40.362 "ctrlr_data": { 00:24:40.362 "cntlid": 2, 00:24:40.362 "vendor_id": "0x8086", 00:24:40.362 "model_number": "SPDK bdev Controller", 00:24:40.362 "serial_number": "00000000000000000000", 00:24:40.362 "firmware_revision": "25.01", 00:24:40.362 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:40.362 "oacs": { 00:24:40.362 "security": 0, 00:24:40.362 "format": 0, 00:24:40.362 "firmware": 0, 00:24:40.362 "ns_manage": 0 00:24:40.362 }, 00:24:40.362 "multi_ctrlr": true, 00:24:40.362 "ana_reporting": false 00:24:40.362 }, 00:24:40.362 "vs": { 00:24:40.362 "nvme_version": "1.3" 00:24:40.362 }, 00:24:40.362 "ns_data": { 00:24:40.362 "id": 1, 00:24:40.362 "can_share": true 00:24:40.362 } 00:24:40.362 } 00:24:40.362 ], 00:24:40.362 "mp_policy": "active_passive" 00:24:40.362 } 00:24:40.362 } 00:24:40.362 ] 00:24:40.362 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.362 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.362 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.362 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.362 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.362 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:40.362 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.XuscICpoYN 
00:24:40.362 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:40.362 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.XuscICpoYN 00:24:40.362 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.XuscICpoYN 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.363 [2024-12-09 05:19:22.686726] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:40.363 [2024-12-09 05:19:22.686847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.363 [2024-12-09 05:19:22.706791] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:40.363 nvme0n1 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.363 [ 00:24:40.363 { 00:24:40.363 "name": "nvme0n1", 00:24:40.363 "aliases": [ 00:24:40.363 "87a2e509-adcc-4a19-bbbb-6118bf2dcb9e" 00:24:40.363 ], 00:24:40.363 "product_name": "NVMe disk", 00:24:40.363 "block_size": 512, 00:24:40.363 "num_blocks": 2097152, 00:24:40.363 "uuid": "87a2e509-adcc-4a19-bbbb-6118bf2dcb9e", 00:24:40.363 "numa_id": 1, 00:24:40.363 "assigned_rate_limits": { 00:24:40.363 "rw_ios_per_sec": 0, 00:24:40.363 
"rw_mbytes_per_sec": 0, 00:24:40.363 "r_mbytes_per_sec": 0, 00:24:40.363 "w_mbytes_per_sec": 0 00:24:40.363 }, 00:24:40.363 "claimed": false, 00:24:40.363 "zoned": false, 00:24:40.363 "supported_io_types": { 00:24:40.363 "read": true, 00:24:40.363 "write": true, 00:24:40.363 "unmap": false, 00:24:40.363 "flush": true, 00:24:40.363 "reset": true, 00:24:40.363 "nvme_admin": true, 00:24:40.363 "nvme_io": true, 00:24:40.363 "nvme_io_md": false, 00:24:40.363 "write_zeroes": true, 00:24:40.363 "zcopy": false, 00:24:40.363 "get_zone_info": false, 00:24:40.363 "zone_management": false, 00:24:40.363 "zone_append": false, 00:24:40.363 "compare": true, 00:24:40.363 "compare_and_write": true, 00:24:40.363 "abort": true, 00:24:40.363 "seek_hole": false, 00:24:40.363 "seek_data": false, 00:24:40.363 "copy": true, 00:24:40.363 "nvme_iov_md": false 00:24:40.363 }, 00:24:40.363 "memory_domains": [ 00:24:40.363 { 00:24:40.363 "dma_device_id": "system", 00:24:40.363 "dma_device_type": 1 00:24:40.363 } 00:24:40.363 ], 00:24:40.363 "driver_specific": { 00:24:40.363 "nvme": [ 00:24:40.363 { 00:24:40.363 "trid": { 00:24:40.363 "trtype": "TCP", 00:24:40.363 "adrfam": "IPv4", 00:24:40.363 "traddr": "10.0.0.2", 00:24:40.363 "trsvcid": "4421", 00:24:40.363 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:40.363 }, 00:24:40.363 "ctrlr_data": { 00:24:40.363 "cntlid": 3, 00:24:40.363 "vendor_id": "0x8086", 00:24:40.363 "model_number": "SPDK bdev Controller", 00:24:40.363 "serial_number": "00000000000000000000", 00:24:40.363 "firmware_revision": "25.01", 00:24:40.363 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:40.363 "oacs": { 00:24:40.363 "security": 0, 00:24:40.363 "format": 0, 00:24:40.363 "firmware": 0, 00:24:40.363 "ns_manage": 0 00:24:40.363 }, 00:24:40.363 "multi_ctrlr": true, 00:24:40.363 "ana_reporting": false 00:24:40.363 }, 00:24:40.363 "vs": { 00:24:40.363 "nvme_version": "1.3" 00:24:40.363 }, 00:24:40.363 "ns_data": { 00:24:40.363 "id": 1, 00:24:40.363 "can_share": true 00:24:40.363 } 
00:24:40.363 } 00:24:40.363 ], 00:24:40.363 "mp_policy": "active_passive" 00:24:40.363 } 00:24:40.363 } 00:24:40.363 ] 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.XuscICpoYN 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:40.363 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:40.623 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:40.623 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:40.623 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:40.623 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:40.623 rmmod nvme_tcp 00:24:40.623 rmmod nvme_fabrics 00:24:40.623 rmmod nvme_keyring 00:24:40.623 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:40.623 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:40.623 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:40.623 05:19:22 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 572365 ']' 00:24:40.623 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 572365 00:24:40.623 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 572365 ']' 00:24:40.623 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 572365 00:24:40.623 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:40.623 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:40.623 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 572365 00:24:40.623 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:40.623 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:40.623 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 572365' 00:24:40.623 killing process with pid 572365 00:24:40.623 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 572365 00:24:40.623 05:19:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 572365 00:24:40.882 05:19:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:40.882 05:19:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:40.882 05:19:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:40.882 05:19:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:40.882 05:19:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:40.882 05:19:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:40.882 05:19:23 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:40.882 05:19:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:40.882 05:19:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:40.882 05:19:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.882 05:19:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.882 05:19:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.790 05:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:42.790 00:24:42.790 real 0m11.542s 00:24:42.790 user 0m4.206s 00:24:42.790 sys 0m6.066s 00:24:42.790 05:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:42.790 05:19:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:42.790 ************************************ 00:24:42.790 END TEST nvmf_async_init 00:24:42.790 ************************************ 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.048 ************************************ 00:24:43.048 START TEST dma 00:24:43.048 ************************************ 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:43.048 * 
Looking for test storage... 00:24:43.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:43.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.048 --rc genhtml_branch_coverage=1 00:24:43.048 --rc genhtml_function_coverage=1 00:24:43.048 --rc genhtml_legend=1 00:24:43.048 --rc geninfo_all_blocks=1 00:24:43.048 --rc geninfo_unexecuted_blocks=1 00:24:43.048 00:24:43.048 ' 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:43.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.048 --rc genhtml_branch_coverage=1 00:24:43.048 --rc genhtml_function_coverage=1 
00:24:43.048 --rc genhtml_legend=1 00:24:43.048 --rc geninfo_all_blocks=1 00:24:43.048 --rc geninfo_unexecuted_blocks=1 00:24:43.048 00:24:43.048 ' 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:43.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.048 --rc genhtml_branch_coverage=1 00:24:43.048 --rc genhtml_function_coverage=1 00:24:43.048 --rc genhtml_legend=1 00:24:43.048 --rc geninfo_all_blocks=1 00:24:43.048 --rc geninfo_unexecuted_blocks=1 00:24:43.048 00:24:43.048 ' 00:24:43.048 05:19:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:43.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.049 --rc genhtml_branch_coverage=1 00:24:43.049 --rc genhtml_function_coverage=1 00:24:43.049 --rc genhtml_legend=1 00:24:43.049 --rc geninfo_all_blocks=1 00:24:43.049 --rc geninfo_unexecuted_blocks=1 00:24:43.049 00:24:43.049 ' 00:24:43.049 05:19:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:43.049 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:43.307 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:43.307 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.307 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:43.307 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.307 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.307 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.307 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.307 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.307 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
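The `cmp_versions 1.15 '<' 2` trace above splits each version on `IFS=.-:` into an array and compares component-wise to decide whether the installed lcov predates 2.x. A simplified sketch of that comparison (`ver_lt` is a hypothetical name; SPDK's helper is `cmp_versions` in scripts/common.sh):

```shell
# Return 0 (true) if dotted version $1 is strictly older than $2.
ver_lt() {
    local -a v1 v2
    local IFS=.-:                 # split on dots, dashes, and colons
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i n=${#v1[@]}
    (( ${#v2[@]} > n )) && n=${#v2[@]}
    for (( i = 0; i < n; i++ )); do
        # Missing components default to 0, so 1.15 vs 2 compares as 1.15 vs 2.0
        if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
        if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
    done
    return 1                      # equal is not less-than
}

ver_lt 1.15 2 && echo "lcov predates 2.x, using legacy --rc options"
```

Component-wise numeric comparison is what makes 1.15 correctly sort below 2 even though "1.15" > "2" lexically.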
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.307 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.307 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:43.307 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:24:43.307 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.307 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.307 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:43.307 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.307 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:43.307 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:43.307 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.307 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.307 05:19:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:43.308 
05:19:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:43.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:43.308 00:24:43.308 real 0m0.228s 00:24:43.308 user 0m0.129s 00:24:43.308 sys 0m0.117s 00:24:43.308 05:19:25 
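The `common.sh: line 33: [: : integer expression expected` message logged above comes from testing an empty variable with `-eq`: `'[' '' -eq 1 ']'` fails because `test` requires both operands of `-eq` to be integers. A sketch of the failure and a defensive form (the `flag` variable is hypothetical; the real check at common.sh line 33 uses one of the harness's config variables):

```shell
flag=""   # unset/empty flag, as in the failing check

# Reproduces the logged failure: test exits with status 2 and an
# "integer expression expected" diagnostic (suppressed here)
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "enabled"
fi

# Defensive form: default the empty value to 0 before the numeric test
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "flag unset, skipping"
fi
```

The `${flag:-0}` expansion guarantees `test` always sees an integer, so the script's control flow is deterministic instead of relying on the error path happening to behave like "false".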
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:43.308 ************************************ 00:24:43.308 END TEST dma 00:24:43.308 ************************************ 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.308 ************************************ 00:24:43.308 START TEST nvmf_identify 00:24:43.308 ************************************ 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:43.308 * Looking for test storage... 
00:24:43.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:24:43.308 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:43.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.568 --rc genhtml_branch_coverage=1 00:24:43.568 --rc genhtml_function_coverage=1 00:24:43.568 --rc genhtml_legend=1 00:24:43.568 --rc geninfo_all_blocks=1 00:24:43.568 --rc geninfo_unexecuted_blocks=1 00:24:43.568 00:24:43.568 ' 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:24:43.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.568 --rc genhtml_branch_coverage=1 00:24:43.568 --rc genhtml_function_coverage=1 00:24:43.568 --rc genhtml_legend=1 00:24:43.568 --rc geninfo_all_blocks=1 00:24:43.568 --rc geninfo_unexecuted_blocks=1 00:24:43.568 00:24:43.568 ' 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:43.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.568 --rc genhtml_branch_coverage=1 00:24:43.568 --rc genhtml_function_coverage=1 00:24:43.568 --rc genhtml_legend=1 00:24:43.568 --rc geninfo_all_blocks=1 00:24:43.568 --rc geninfo_unexecuted_blocks=1 00:24:43.568 00:24:43.568 ' 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:43.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.568 --rc genhtml_branch_coverage=1 00:24:43.568 --rc genhtml_function_coverage=1 00:24:43.568 --rc genhtml_legend=1 00:24:43.568 --rc geninfo_all_blocks=1 00:24:43.568 --rc geninfo_unexecuted_blocks=1 00:24:43.568 00:24:43.568 ' 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.568 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:43.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:43.569 05:19:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:51.692 05:19:32 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:51.692 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.692 
05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:51.692 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.692 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:51.693 Found net devices under 0000:af:00.0: cvl_0_0 00:24:51.693 05:19:32 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:51.693 Found net devices under 0000:af:00.1: cvl_0_1 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:51.693 05:19:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:51.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:51.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:24:51.693 00:24:51.693 --- 10.0.0.2 ping statistics --- 00:24:51.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.693 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:51.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:51.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:24:51.693 00:24:51.693 --- 10.0.0.1 ping statistics --- 00:24:51.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.693 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=576653 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 576653 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 576653 ']' 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:51.693 05:19:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:51.693 [2024-12-09 05:19:33.274853] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:24:51.693 [2024-12-09 05:19:33.274901] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.693 [2024-12-09 05:19:33.373744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:51.693 [2024-12-09 05:19:33.416226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.693 [2024-12-09 05:19:33.416262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.693 [2024-12-09 05:19:33.416272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.693 [2024-12-09 05:19:33.416283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.693 [2024-12-09 05:19:33.416291] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:51.693 [2024-12-09 05:19:33.420226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.693 [2024-12-09 05:19:33.420320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.693 [2024-12-09 05:19:33.420428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.693 [2024-12-09 05:19:33.420429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:51.693 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.693 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:24:51.693 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:51.693 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.693 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:51.693 [2024-12-09 05:19:34.104764] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.693 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.693 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:51.693 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:51.693 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:51.955 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:51.955 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.955 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:51.955 Malloc0 00:24:51.955 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.955 05:19:34 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:51.955 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.955 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:51.955 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.955 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:51.955 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.955 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:51.955 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.955 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:51.955 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.955 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:51.955 [2024-12-09 05:19:34.217079] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.955 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.955 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:51.955 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.955 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:51.955 05:19:34 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.955 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:51.955 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.955 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:51.955 [ 00:24:51.955 { 00:24:51.955 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:51.955 "subtype": "Discovery", 00:24:51.955 "listen_addresses": [ 00:24:51.955 { 00:24:51.955 "trtype": "TCP", 00:24:51.955 "adrfam": "IPv4", 00:24:51.955 "traddr": "10.0.0.2", 00:24:51.955 "trsvcid": "4420" 00:24:51.955 } 00:24:51.955 ], 00:24:51.955 "allow_any_host": true, 00:24:51.955 "hosts": [] 00:24:51.955 }, 00:24:51.955 { 00:24:51.956 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.956 "subtype": "NVMe", 00:24:51.956 "listen_addresses": [ 00:24:51.956 { 00:24:51.956 "trtype": "TCP", 00:24:51.956 "adrfam": "IPv4", 00:24:51.956 "traddr": "10.0.0.2", 00:24:51.956 "trsvcid": "4420" 00:24:51.956 } 00:24:51.956 ], 00:24:51.956 "allow_any_host": true, 00:24:51.956 "hosts": [], 00:24:51.956 "serial_number": "SPDK00000000000001", 00:24:51.956 "model_number": "SPDK bdev Controller", 00:24:51.956 "max_namespaces": 32, 00:24:51.956 "min_cntlid": 1, 00:24:51.956 "max_cntlid": 65519, 00:24:51.956 "namespaces": [ 00:24:51.956 { 00:24:51.956 "nsid": 1, 00:24:51.956 "bdev_name": "Malloc0", 00:24:51.956 "name": "Malloc0", 00:24:51.956 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:51.956 "eui64": "ABCDEF0123456789", 00:24:51.956 "uuid": "233b9d09-fd06-4afe-a84d-269ec90f6c1a" 00:24:51.956 } 00:24:51.956 ] 00:24:51.956 } 00:24:51.956 ] 00:24:51.956 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.956 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:51.956 [2024-12-09 05:19:34.275289] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:24:51.956 [2024-12-09 05:19:34.275327] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid576694 ] 00:24:51.956 [2024-12-09 05:19:34.324173] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:51.956 [2024-12-09 05:19:34.324228] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:51.956 [2024-12-09 05:19:34.324234] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:51.956 [2024-12-09 05:19:34.324251] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:51.956 [2024-12-09 05:19:34.324261] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:51.956 [2024-12-09 05:19:34.324894] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:51.956 [2024-12-09 05:19:34.324932] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x10a3690 0 00:24:51.956 [2024-12-09 05:19:34.331219] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:51.956 [2024-12-09 05:19:34.331234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:51.956 [2024-12-09 05:19:34.331240] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:51.956 [2024-12-09 05:19:34.331244] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:51.956 [2024-12-09 05:19:34.331281] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.956 [2024-12-09 05:19:34.331287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.956 [2024-12-09 05:19:34.331292] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10a3690) 00:24:51.956 [2024-12-09 05:19:34.331306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:51.956 [2024-12-09 05:19:34.331325] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105100, cid 0, qid 0 00:24:51.956 [2024-12-09 05:19:34.339216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.956 [2024-12-09 05:19:34.339225] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.956 [2024-12-09 05:19:34.339230] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.956 [2024-12-09 05:19:34.339236] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105100) on tqpair=0x10a3690 00:24:51.956 [2024-12-09 05:19:34.339250] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:51.956 [2024-12-09 05:19:34.339258] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:51.956 [2024-12-09 05:19:34.339268] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:51.956 [2024-12-09 05:19:34.339286] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.956 [2024-12-09 05:19:34.339291] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.956 [2024-12-09 05:19:34.339295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10a3690) 
00:24:51.956 [2024-12-09 05:19:34.339303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.956 [2024-12-09 05:19:34.339318] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105100, cid 0, qid 0 00:24:51.956 [2024-12-09 05:19:34.339488] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.956 [2024-12-09 05:19:34.339495] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.956 [2024-12-09 05:19:34.339499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.956 [2024-12-09 05:19:34.339504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105100) on tqpair=0x10a3690 00:24:51.956 [2024-12-09 05:19:34.339513] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:51.956 [2024-12-09 05:19:34.339521] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:51.956 [2024-12-09 05:19:34.339529] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.956 [2024-12-09 05:19:34.339534] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.956 [2024-12-09 05:19:34.339539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10a3690) 00:24:51.956 [2024-12-09 05:19:34.339546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.956 [2024-12-09 05:19:34.339558] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105100, cid 0, qid 0 00:24:51.956 [2024-12-09 05:19:34.339625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.956 [2024-12-09 05:19:34.339631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:24:51.956 [2024-12-09 05:19:34.339636] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.956 [2024-12-09 05:19:34.339641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105100) on tqpair=0x10a3690 00:24:51.956 [2024-12-09 05:19:34.339647] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:51.956 [2024-12-09 05:19:34.339656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:51.956 [2024-12-09 05:19:34.339663] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.956 [2024-12-09 05:19:34.339668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.956 [2024-12-09 05:19:34.339673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10a3690) 00:24:51.956 [2024-12-09 05:19:34.339680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.956 [2024-12-09 05:19:34.339691] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105100, cid 0, qid 0 00:24:51.956 [2024-12-09 05:19:34.339755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.956 [2024-12-09 05:19:34.339762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.956 [2024-12-09 05:19:34.339766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.956 [2024-12-09 05:19:34.339771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105100) on tqpair=0x10a3690 00:24:51.956 [2024-12-09 05:19:34.339777] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:51.956 [2024-12-09 05:19:34.339789] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.956 [2024-12-09 05:19:34.339795] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.956 [2024-12-09 05:19:34.339799] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10a3690) 00:24:51.956 [2024-12-09 05:19:34.339806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.956 [2024-12-09 05:19:34.339818] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105100, cid 0, qid 0 00:24:51.956 [2024-12-09 05:19:34.339881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.956 [2024-12-09 05:19:34.339887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.956 [2024-12-09 05:19:34.339892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.956 [2024-12-09 05:19:34.339897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105100) on tqpair=0x10a3690 00:24:51.956 [2024-12-09 05:19:34.339902] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:51.956 [2024-12-09 05:19:34.339908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:51.956 [2024-12-09 05:19:34.339917] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:51.956 [2024-12-09 05:19:34.340024] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:51.956 [2024-12-09 05:19:34.340030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:24:51.956 [2024-12-09 05:19:34.340040] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.956 [2024-12-09 05:19:34.340044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.956 [2024-12-09 05:19:34.340049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10a3690) 00:24:51.956 [2024-12-09 05:19:34.340056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.956 [2024-12-09 05:19:34.340067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105100, cid 0, qid 0 00:24:51.956 [2024-12-09 05:19:34.340154] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.956 [2024-12-09 05:19:34.340160] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.956 [2024-12-09 05:19:34.340165] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.956 [2024-12-09 05:19:34.340169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105100) on tqpair=0x10a3690 00:24:51.956 [2024-12-09 05:19:34.340175] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:51.956 [2024-12-09 05:19:34.340186] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.956 [2024-12-09 05:19:34.340191] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.956 [2024-12-09 05:19:34.340195] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10a3690) 00:24:51.956 [2024-12-09 05:19:34.340202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.957 [2024-12-09 05:19:34.340218] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105100, cid 0, qid 0 00:24:51.957 [2024-12-09 
05:19:34.340285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.957 [2024-12-09 05:19:34.340292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.957 [2024-12-09 05:19:34.340296] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.340301] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105100) on tqpair=0x10a3690 00:24:51.957 [2024-12-09 05:19:34.340311] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:51.957 [2024-12-09 05:19:34.340317] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:51.957 [2024-12-09 05:19:34.340326] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:51.957 [2024-12-09 05:19:34.340335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:51.957 [2024-12-09 05:19:34.340345] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.340350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10a3690) 00:24:51.957 [2024-12-09 05:19:34.340357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.957 [2024-12-09 05:19:34.340369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105100, cid 0, qid 0 00:24:51.957 [2024-12-09 05:19:34.340461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.957 [2024-12-09 05:19:34.340468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:24:51.957 [2024-12-09 05:19:34.340472] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.340477] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10a3690): datao=0, datal=4096, cccid=0 00:24:51.957 [2024-12-09 05:19:34.340484] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1105100) on tqpair(0x10a3690): expected_datao=0, payload_size=4096 00:24:51.957 [2024-12-09 05:19:34.340490] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.340505] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.340511] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.382338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.957 [2024-12-09 05:19:34.382349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.957 [2024-12-09 05:19:34.382354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.382359] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105100) on tqpair=0x10a3690 00:24:51.957 [2024-12-09 05:19:34.382369] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:51.957 [2024-12-09 05:19:34.382375] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:51.957 [2024-12-09 05:19:34.382381] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:51.957 [2024-12-09 05:19:34.382387] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:51.957 [2024-12-09 05:19:34.382393] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:51.957 [2024-12-09 05:19:34.382399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:51.957 [2024-12-09 05:19:34.382409] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:51.957 [2024-12-09 05:19:34.382418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.382423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.382427] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10a3690) 00:24:51.957 [2024-12-09 05:19:34.382435] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:51.957 [2024-12-09 05:19:34.382453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105100, cid 0, qid 0 00:24:51.957 [2024-12-09 05:19:34.382518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.957 [2024-12-09 05:19:34.382525] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.957 [2024-12-09 05:19:34.382530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.382534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105100) on tqpair=0x10a3690 00:24:51.957 [2024-12-09 05:19:34.382542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.382547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.382551] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10a3690) 00:24:51.957 [2024-12-09 05:19:34.382558] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.957 [2024-12-09 05:19:34.382565] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.382570] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.382574] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x10a3690) 00:24:51.957 [2024-12-09 05:19:34.382581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.957 [2024-12-09 05:19:34.382588] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.382592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.382597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x10a3690) 00:24:51.957 [2024-12-09 05:19:34.382603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.957 [2024-12-09 05:19:34.382610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.382614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.382619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a3690) 00:24:51.957 [2024-12-09 05:19:34.382625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.957 [2024-12-09 05:19:34.382631] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:51.957 [2024-12-09 05:19:34.382644] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:51.957 [2024-12-09 05:19:34.382652] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.382656] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10a3690) 00:24:51.957 [2024-12-09 05:19:34.382664] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.957 [2024-12-09 05:19:34.382677] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105100, cid 0, qid 0 00:24:51.957 [2024-12-09 05:19:34.382683] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105280, cid 1, qid 0 00:24:51.957 [2024-12-09 05:19:34.382688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105400, cid 2, qid 0 00:24:51.957 [2024-12-09 05:19:34.382694] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105580, cid 3, qid 0 00:24:51.957 [2024-12-09 05:19:34.382699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105700, cid 4, qid 0 00:24:51.957 [2024-12-09 05:19:34.382792] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.957 [2024-12-09 05:19:34.382799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.957 [2024-12-09 05:19:34.382803] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.382810] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105700) on tqpair=0x10a3690 00:24:51.957 [2024-12-09 05:19:34.382816] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:51.957 [2024-12-09 05:19:34.382823] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:24:51.957 [2024-12-09 05:19:34.382834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.382839] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10a3690) 00:24:51.957 [2024-12-09 05:19:34.382846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.957 [2024-12-09 05:19:34.382858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105700, cid 4, qid 0 00:24:51.957 [2024-12-09 05:19:34.382929] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.957 [2024-12-09 05:19:34.382936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.957 [2024-12-09 05:19:34.382940] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.382945] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10a3690): datao=0, datal=4096, cccid=4 00:24:51.957 [2024-12-09 05:19:34.382951] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1105700) on tqpair(0x10a3690): expected_datao=0, payload_size=4096 00:24:51.957 [2024-12-09 05:19:34.382956] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.382964] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.382969] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.382977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.957 [2024-12-09 05:19:34.382984] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.957 [2024-12-09 05:19:34.382988] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.382993] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1105700) on tqpair=0x10a3690 00:24:51.957 [2024-12-09 05:19:34.383007] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:51.957 [2024-12-09 05:19:34.383033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.383038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10a3690) 00:24:51.957 [2024-12-09 05:19:34.383045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.957 [2024-12-09 05:19:34.383053] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.383057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.957 [2024-12-09 05:19:34.383062] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10a3690) 00:24:51.957 [2024-12-09 05:19:34.383069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.958 [2024-12-09 05:19:34.383084] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105700, cid 4, qid 0 00:24:51.958 [2024-12-09 05:19:34.383091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105880, cid 5, qid 0 00:24:51.958 [2024-12-09 05:19:34.383194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.958 [2024-12-09 05:19:34.383201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.958 [2024-12-09 05:19:34.383205] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.958 [2024-12-09 05:19:34.383215] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10a3690): datao=0, datal=1024, cccid=4 00:24:51.958 [2024-12-09 05:19:34.383221] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1105700) on tqpair(0x10a3690): expected_datao=0, payload_size=1024 00:24:51.958 [2024-12-09 05:19:34.383229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.958 [2024-12-09 05:19:34.383236] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.958 [2024-12-09 05:19:34.383240] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.958 [2024-12-09 05:19:34.383247] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.958 [2024-12-09 05:19:34.383253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.958 [2024-12-09 05:19:34.383257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.958 [2024-12-09 05:19:34.383262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105880) on tqpair=0x10a3690 00:24:52.223 [2024-12-09 05:19:34.423366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.224 [2024-12-09 05:19:34.423379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.224 [2024-12-09 05:19:34.423384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.224 [2024-12-09 05:19:34.423389] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105700) on tqpair=0x10a3690 00:24:52.224 [2024-12-09 05:19:34.423402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.224 [2024-12-09 05:19:34.423407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10a3690) 00:24:52.224 [2024-12-09 05:19:34.423416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.224 [2024-12-09 05:19:34.423434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105700, cid 4, qid 0 00:24:52.224 [2024-12-09 05:19:34.423518] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:52.224 [2024-12-09 05:19:34.423525] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:52.224 [2024-12-09 05:19:34.423530] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:52.224 [2024-12-09 05:19:34.423534] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10a3690): datao=0, datal=3072, cccid=4 00:24:52.224 [2024-12-09 05:19:34.423540] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1105700) on tqpair(0x10a3690): expected_datao=0, payload_size=3072 00:24:52.224 [2024-12-09 05:19:34.423546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.224 [2024-12-09 05:19:34.423553] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:52.224 [2024-12-09 05:19:34.423557] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:52.224 [2024-12-09 05:19:34.423566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.224 [2024-12-09 05:19:34.423572] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.224 [2024-12-09 05:19:34.423577] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.224 [2024-12-09 05:19:34.423581] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105700) on tqpair=0x10a3690 00:24:52.224 [2024-12-09 05:19:34.423591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.224 [2024-12-09 05:19:34.423596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10a3690) 00:24:52.224 [2024-12-09 05:19:34.423602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.224 [2024-12-09 05:19:34.423619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105700, cid 4, qid 0 00:24:52.224 [2024-12-09 
05:19:34.423690] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:52.224 [2024-12-09 05:19:34.423696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:52.224 [2024-12-09 05:19:34.423701] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:52.224 [2024-12-09 05:19:34.423705] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10a3690): datao=0, datal=8, cccid=4 00:24:52.224 [2024-12-09 05:19:34.423711] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1105700) on tqpair(0x10a3690): expected_datao=0, payload_size=8 00:24:52.224 [2024-12-09 05:19:34.423717] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.224 [2024-12-09 05:19:34.423727] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:52.224 [2024-12-09 05:19:34.423732] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:52.224 [2024-12-09 05:19:34.468962] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.224 [2024-12-09 05:19:34.468977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.224 [2024-12-09 05:19:34.468983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.224 [2024-12-09 05:19:34.468988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105700) on tqpair=0x10a3690 00:24:52.224 ===================================================== 00:24:52.224 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:52.224 ===================================================== 00:24:52.224 Controller Capabilities/Features 00:24:52.224 ================================ 00:24:52.224 Vendor ID: 0000 00:24:52.224 Subsystem Vendor ID: 0000 00:24:52.224 Serial Number: .................... 00:24:52.224 Model Number: ........................................ 
00:24:52.224 Firmware Version: 25.01 00:24:52.224 Recommended Arb Burst: 0 00:24:52.224 IEEE OUI Identifier: 00 00 00 00:24:52.224 Multi-path I/O 00:24:52.224 May have multiple subsystem ports: No 00:24:52.224 May have multiple controllers: No 00:24:52.224 Associated with SR-IOV VF: No 00:24:52.224 Max Data Transfer Size: 131072 00:24:52.224 Max Number of Namespaces: 0 00:24:52.224 Max Number of I/O Queues: 1024 00:24:52.224 NVMe Specification Version (VS): 1.3 00:24:52.224 NVMe Specification Version (Identify): 1.3 00:24:52.224 Maximum Queue Entries: 128 00:24:52.224 Contiguous Queues Required: Yes 00:24:52.224 Arbitration Mechanisms Supported 00:24:52.224 Weighted Round Robin: Not Supported 00:24:52.224 Vendor Specific: Not Supported 00:24:52.224 Reset Timeout: 15000 ms 00:24:52.224 Doorbell Stride: 4 bytes 00:24:52.224 NVM Subsystem Reset: Not Supported 00:24:52.224 Command Sets Supported 00:24:52.224 NVM Command Set: Supported 00:24:52.224 Boot Partition: Not Supported 00:24:52.224 Memory Page Size Minimum: 4096 bytes 00:24:52.224 Memory Page Size Maximum: 4096 bytes 00:24:52.224 Persistent Memory Region: Not Supported 00:24:52.224 Optional Asynchronous Events Supported 00:24:52.224 Namespace Attribute Notices: Not Supported 00:24:52.224 Firmware Activation Notices: Not Supported 00:24:52.224 ANA Change Notices: Not Supported 00:24:52.224 PLE Aggregate Log Change Notices: Not Supported 00:24:52.224 LBA Status Info Alert Notices: Not Supported 00:24:52.224 EGE Aggregate Log Change Notices: Not Supported 00:24:52.224 Normal NVM Subsystem Shutdown event: Not Supported 00:24:52.224 Zone Descriptor Change Notices: Not Supported 00:24:52.224 Discovery Log Change Notices: Supported 00:24:52.224 Controller Attributes 00:24:52.224 128-bit Host Identifier: Not Supported 00:24:52.224 Non-Operational Permissive Mode: Not Supported 00:24:52.224 NVM Sets: Not Supported 00:24:52.224 Read Recovery Levels: Not Supported 00:24:52.224 Endurance Groups: Not Supported 00:24:52.224 
Predictable Latency Mode: Not Supported 00:24:52.224 Traffic Based Keep ALive: Not Supported 00:24:52.224 Namespace Granularity: Not Supported 00:24:52.224 SQ Associations: Not Supported 00:24:52.224 UUID List: Not Supported 00:24:52.224 Multi-Domain Subsystem: Not Supported 00:24:52.224 Fixed Capacity Management: Not Supported 00:24:52.224 Variable Capacity Management: Not Supported 00:24:52.224 Delete Endurance Group: Not Supported 00:24:52.224 Delete NVM Set: Not Supported 00:24:52.224 Extended LBA Formats Supported: Not Supported 00:24:52.224 Flexible Data Placement Supported: Not Supported 00:24:52.224 00:24:52.224 Controller Memory Buffer Support 00:24:52.224 ================================ 00:24:52.224 Supported: No 00:24:52.224 00:24:52.224 Persistent Memory Region Support 00:24:52.224 ================================ 00:24:52.224 Supported: No 00:24:52.224 00:24:52.224 Admin Command Set Attributes 00:24:52.224 ============================ 00:24:52.224 Security Send/Receive: Not Supported 00:24:52.224 Format NVM: Not Supported 00:24:52.224 Firmware Activate/Download: Not Supported 00:24:52.224 Namespace Management: Not Supported 00:24:52.224 Device Self-Test: Not Supported 00:24:52.224 Directives: Not Supported 00:24:52.224 NVMe-MI: Not Supported 00:24:52.224 Virtualization Management: Not Supported 00:24:52.224 Doorbell Buffer Config: Not Supported 00:24:52.224 Get LBA Status Capability: Not Supported 00:24:52.224 Command & Feature Lockdown Capability: Not Supported 00:24:52.224 Abort Command Limit: 1 00:24:52.224 Async Event Request Limit: 4 00:24:52.224 Number of Firmware Slots: N/A 00:24:52.224 Firmware Slot 1 Read-Only: N/A 00:24:52.224 Firmware Activation Without Reset: N/A 00:24:52.224 Multiple Update Detection Support: N/A 00:24:52.224 Firmware Update Granularity: No Information Provided 00:24:52.224 Per-Namespace SMART Log: No 00:24:52.224 Asymmetric Namespace Access Log Page: Not Supported 00:24:52.224 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:24:52.224 Command Effects Log Page: Not Supported 00:24:52.224 Get Log Page Extended Data: Supported 00:24:52.224 Telemetry Log Pages: Not Supported 00:24:52.224 Persistent Event Log Pages: Not Supported 00:24:52.224 Supported Log Pages Log Page: May Support 00:24:52.224 Commands Supported & Effects Log Page: Not Supported 00:24:52.224 Feature Identifiers & Effects Log Page:May Support 00:24:52.224 NVMe-MI Commands & Effects Log Page: May Support 00:24:52.224 Data Area 4 for Telemetry Log: Not Supported 00:24:52.224 Error Log Page Entries Supported: 128 00:24:52.224 Keep Alive: Not Supported 00:24:52.224 00:24:52.224 NVM Command Set Attributes 00:24:52.224 ========================== 00:24:52.224 Submission Queue Entry Size 00:24:52.224 Max: 1 00:24:52.224 Min: 1 00:24:52.224 Completion Queue Entry Size 00:24:52.224 Max: 1 00:24:52.224 Min: 1 00:24:52.224 Number of Namespaces: 0 00:24:52.224 Compare Command: Not Supported 00:24:52.224 Write Uncorrectable Command: Not Supported 00:24:52.224 Dataset Management Command: Not Supported 00:24:52.224 Write Zeroes Command: Not Supported 00:24:52.225 Set Features Save Field: Not Supported 00:24:52.225 Reservations: Not Supported 00:24:52.225 Timestamp: Not Supported 00:24:52.225 Copy: Not Supported 00:24:52.225 Volatile Write Cache: Not Present 00:24:52.225 Atomic Write Unit (Normal): 1 00:24:52.225 Atomic Write Unit (PFail): 1 00:24:52.225 Atomic Compare & Write Unit: 1 00:24:52.225 Fused Compare & Write: Supported 00:24:52.225 Scatter-Gather List 00:24:52.225 SGL Command Set: Supported 00:24:52.225 SGL Keyed: Supported 00:24:52.225 SGL Bit Bucket Descriptor: Not Supported 00:24:52.225 SGL Metadata Pointer: Not Supported 00:24:52.225 Oversized SGL: Not Supported 00:24:52.225 SGL Metadata Address: Not Supported 00:24:52.225 SGL Offset: Supported 00:24:52.225 Transport SGL Data Block: Not Supported 00:24:52.225 Replay Protected Memory Block: Not Supported 00:24:52.225 00:24:52.225 
Firmware Slot Information 00:24:52.225 ========================= 00:24:52.225 Active slot: 0 00:24:52.225 00:24:52.225 00:24:52.225 Error Log 00:24:52.225 ========= 00:24:52.225 00:24:52.225 Active Namespaces 00:24:52.225 ================= 00:24:52.225 Discovery Log Page 00:24:52.225 ================== 00:24:52.225 Generation Counter: 2 00:24:52.225 Number of Records: 2 00:24:52.225 Record Format: 0 00:24:52.225 00:24:52.225 Discovery Log Entry 0 00:24:52.225 ---------------------- 00:24:52.225 Transport Type: 3 (TCP) 00:24:52.225 Address Family: 1 (IPv4) 00:24:52.225 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:52.225 Entry Flags: 00:24:52.225 Duplicate Returned Information: 1 00:24:52.225 Explicit Persistent Connection Support for Discovery: 1 00:24:52.225 Transport Requirements: 00:24:52.225 Secure Channel: Not Required 00:24:52.225 Port ID: 0 (0x0000) 00:24:52.225 Controller ID: 65535 (0xffff) 00:24:52.225 Admin Max SQ Size: 128 00:24:52.225 Transport Service Identifier: 4420 00:24:52.225 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:52.225 Transport Address: 10.0.0.2 00:24:52.225 Discovery Log Entry 1 00:24:52.225 ---------------------- 00:24:52.225 Transport Type: 3 (TCP) 00:24:52.225 Address Family: 1 (IPv4) 00:24:52.225 Subsystem Type: 2 (NVM Subsystem) 00:24:52.225 Entry Flags: 00:24:52.225 Duplicate Returned Information: 0 00:24:52.225 Explicit Persistent Connection Support for Discovery: 0 00:24:52.225 Transport Requirements: 00:24:52.225 Secure Channel: Not Required 00:24:52.225 Port ID: 0 (0x0000) 00:24:52.225 Controller ID: 65535 (0xffff) 00:24:52.225 Admin Max SQ Size: 128 00:24:52.225 Transport Service Identifier: 4420 00:24:52.225 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:52.225 Transport Address: 10.0.0.2 [2024-12-09 05:19:34.469082] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:52.225 [2024-12-09 
05:19:34.469095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105100) on tqpair=0x10a3690 00:24:52.225 [2024-12-09 05:19:34.469103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.225 [2024-12-09 05:19:34.469110] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105280) on tqpair=0x10a3690 00:24:52.225 [2024-12-09 05:19:34.469115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.225 [2024-12-09 05:19:34.469122] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105400) on tqpair=0x10a3690 00:24:52.225 [2024-12-09 05:19:34.469127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.225 [2024-12-09 05:19:34.469133] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105580) on tqpair=0x10a3690 00:24:52.225 [2024-12-09 05:19:34.469138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.225 [2024-12-09 05:19:34.469148] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.225 [2024-12-09 05:19:34.469153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.225 [2024-12-09 05:19:34.469158] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a3690) 00:24:52.225 [2024-12-09 05:19:34.469166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.225 [2024-12-09 05:19:34.469184] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105580, cid 3, qid 0 00:24:52.225 [2024-12-09 05:19:34.469272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.225 [2024-12-09 
05:19:34.469279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.225 [2024-12-09 05:19:34.469283] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.225 [2024-12-09 05:19:34.469288] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105580) on tqpair=0x10a3690 00:24:52.225 [2024-12-09 05:19:34.469295] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.225 [2024-12-09 05:19:34.469300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.225 [2024-12-09 05:19:34.469305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a3690) 00:24:52.225 [2024-12-09 05:19:34.469312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.225 [2024-12-09 05:19:34.469327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105580, cid 3, qid 0 00:24:52.225 [2024-12-09 05:19:34.469422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.225 [2024-12-09 05:19:34.469429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.225 [2024-12-09 05:19:34.469434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.225 [2024-12-09 05:19:34.469439] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105580) on tqpair=0x10a3690 00:24:52.225 [2024-12-09 05:19:34.469444] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:52.225 [2024-12-09 05:19:34.469450] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:52.225 [2024-12-09 05:19:34.469463] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.225 [2024-12-09 05:19:34.469469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.225 
[2024-12-09 05:19:34.469473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a3690) 00:24:52.225 [2024-12-09 05:19:34.469480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.225 [2024-12-09 05:19:34.469492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105580, cid 3, qid 0 00:24:52.225 [2024-12-09 05:19:34.469559] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.225 [2024-12-09 05:19:34.469566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.225 [2024-12-09 05:19:34.469571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.225 [2024-12-09 05:19:34.469575] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105580) on tqpair=0x10a3690 00:24:52.225 [2024-12-09 05:19:34.469586] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.225 [2024-12-09 05:19:34.469591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.225 [2024-12-09 05:19:34.469595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a3690) 00:24:52.225 [2024-12-09 05:19:34.469602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.225 [2024-12-09 05:19:34.469614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105580, cid 3, qid 0 00:24:52.225 [2024-12-09 05:19:34.469685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.225 [2024-12-09 05:19:34.469692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.225 [2024-12-09 05:19:34.469696] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.225 [2024-12-09 05:19:34.469701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105580) on 
tqpair=0x10a3690 00:24:52.225 [2024-12-09 05:19:34.469710] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.225 [2024-12-09 05:19:34.469715] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.225 [2024-12-09 05:19:34.469720] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a3690) 00:24:52.225 [2024-12-09 05:19:34.469727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.225 [2024-12-09 05:19:34.469738] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105580, cid 3, qid 0 00:24:52.225 [2024-12-09 05:19:34.469802] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.225 [2024-12-09 05:19:34.469809] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.225 [2024-12-09 05:19:34.469814] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.225 [2024-12-09 05:19:34.469818] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105580) on tqpair=0x10a3690 00:24:52.225 [2024-12-09 05:19:34.469828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.225 [2024-12-09 05:19:34.469833] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.225 [2024-12-09 05:19:34.469837] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a3690) 00:24:52.225 [2024-12-09 05:19:34.469844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.225 [2024-12-09 05:19:34.469856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105580, cid 3, qid 0 00:24:52.225 [2024-12-09 05:19:34.469920] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.225 [2024-12-09 05:19:34.469927] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:24:52.225 [2024-12-09 05:19:34.469931] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.225 [2024-12-09 05:19:34.469936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105580) on tqpair=0x10a3690 00:24:52.225 [2024-12-09 05:19:34.469947] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.225 [2024-12-09 05:19:34.469953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.225 [2024-12-09 05:19:34.469957] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a3690) 00:24:52.226 [2024-12-09 05:19:34.469964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.226 [2024-12-09 05:19:34.469975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105580, cid 3, qid 0 00:24:52.226 [2024-12-09 05:19:34.470040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.226 [2024-12-09 05:19:34.470047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.226 [2024-12-09 05:19:34.470051] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.470056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105580) on tqpair=0x10a3690 00:24:52.226 [2024-12-09 05:19:34.470065] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.470070] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.470075] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a3690) 00:24:52.226 [2024-12-09 05:19:34.470082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.226 [2024-12-09 05:19:34.470093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1105580, cid 3, qid 0 00:24:52.226 [2024-12-09 05:19:34.470155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.226 [2024-12-09 05:19:34.470162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.226 [2024-12-09 05:19:34.470167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.470171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105580) on tqpair=0x10a3690 00:24:52.226 [2024-12-09 05:19:34.470181] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.470185] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.470190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a3690) 00:24:52.226 [2024-12-09 05:19:34.470197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.226 [2024-12-09 05:19:34.470214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105580, cid 3, qid 0 00:24:52.226 [2024-12-09 05:19:34.474216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.226 [2024-12-09 05:19:34.474226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.226 [2024-12-09 05:19:34.474230] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.474235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105580) on tqpair=0x10a3690 00:24:52.226 [2024-12-09 05:19:34.474246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.474251] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.474256] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a3690) 00:24:52.226 [2024-12-09 05:19:34.474263] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.226 [2024-12-09 05:19:34.474276] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1105580, cid 3, qid 0 00:24:52.226 [2024-12-09 05:19:34.474419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.226 [2024-12-09 05:19:34.474426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.226 [2024-12-09 05:19:34.474430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.474435] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1105580) on tqpair=0x10a3690 00:24:52.226 [2024-12-09 05:19:34.474443] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:24:52.226 00:24:52.226 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:52.226 [2024-12-09 05:19:34.592217] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:24:52.226 [2024-12-09 05:19:34.592264] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid576832 ] 00:24:52.226 [2024-12-09 05:19:34.640827] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:52.226 [2024-12-09 05:19:34.640872] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:52.226 [2024-12-09 05:19:34.640878] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:52.226 [2024-12-09 05:19:34.640892] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:52.226 [2024-12-09 05:19:34.640901] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:52.226 [2024-12-09 05:19:34.641280] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:52.226 [2024-12-09 05:19:34.641312] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xeac690 0 00:24:52.226 [2024-12-09 05:19:34.647219] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:52.226 [2024-12-09 05:19:34.647232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:52.226 [2024-12-09 05:19:34.647237] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:52.226 [2024-12-09 05:19:34.647242] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:52.226 [2024-12-09 05:19:34.647272] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.647278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.647283] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeac690) 00:24:52.226 [2024-12-09 05:19:34.647294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:52.226 [2024-12-09 05:19:34.647312] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e100, cid 0, qid 0 00:24:52.226 [2024-12-09 05:19:34.654216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.226 [2024-12-09 05:19:34.654226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.226 [2024-12-09 05:19:34.654231] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.654236] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e100) on tqpair=0xeac690 00:24:52.226 [2024-12-09 05:19:34.654249] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:52.226 [2024-12-09 05:19:34.654257] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:52.226 [2024-12-09 05:19:34.654263] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:52.226 [2024-12-09 05:19:34.654278] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.654283] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.654287] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeac690) 00:24:52.226 [2024-12-09 05:19:34.654295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.226 [2024-12-09 05:19:34.654314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e100, cid 0, qid 0 00:24:52.226 [2024-12-09 05:19:34.654404] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.226 [2024-12-09 05:19:34.654411] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.226 [2024-12-09 05:19:34.654416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.654420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e100) on tqpair=0xeac690 00:24:52.226 [2024-12-09 05:19:34.654428] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:52.226 [2024-12-09 05:19:34.654437] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:52.226 [2024-12-09 05:19:34.654445] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.654449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.654454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeac690) 00:24:52.226 [2024-12-09 05:19:34.654461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.226 [2024-12-09 05:19:34.654474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e100, cid 0, qid 0 00:24:52.226 [2024-12-09 05:19:34.654553] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.226 [2024-12-09 05:19:34.654559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.226 [2024-12-09 05:19:34.654564] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.654569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e100) on tqpair=0xeac690 00:24:52.226 [2024-12-09 05:19:34.654574] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to check en (no timeout) 00:24:52.226 [2024-12-09 05:19:34.654584] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:52.226 [2024-12-09 05:19:34.654592] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.654596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.654601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeac690) 00:24:52.226 [2024-12-09 05:19:34.654608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.226 [2024-12-09 05:19:34.654620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e100, cid 0, qid 0 00:24:52.226 [2024-12-09 05:19:34.654695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.226 [2024-12-09 05:19:34.654701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.226 [2024-12-09 05:19:34.654706] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.654710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e100) on tqpair=0xeac690 00:24:52.226 [2024-12-09 05:19:34.654716] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:52.226 [2024-12-09 05:19:34.654727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.654732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.226 [2024-12-09 05:19:34.654736] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeac690) 00:24:52.226 [2024-12-09 05:19:34.654743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.226 [2024-12-09 05:19:34.654755] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e100, cid 0, qid 0 00:24:52.226 [2024-12-09 05:19:34.654815] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.226 [2024-12-09 05:19:34.654822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.226 [2024-12-09 05:19:34.654828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.654833] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e100) on tqpair=0xeac690 00:24:52.227 [2024-12-09 05:19:34.654838] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:52.227 [2024-12-09 05:19:34.654844] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:52.227 [2024-12-09 05:19:34.654852] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:52.227 [2024-12-09 05:19:34.654959] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:52.227 [2024-12-09 05:19:34.654965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:52.227 [2024-12-09 05:19:34.654973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.654978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.654982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeac690) 00:24:52.227 [2024-12-09 05:19:34.654989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.227 [2024-12-09 05:19:34.655001] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e100, cid 0, qid 0 00:24:52.227 [2024-12-09 05:19:34.655066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.227 [2024-12-09 05:19:34.655073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.227 [2024-12-09 05:19:34.655077] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655082] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e100) on tqpair=0xeac690 00:24:52.227 [2024-12-09 05:19:34.655087] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:52.227 [2024-12-09 05:19:34.655097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655106] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeac690) 00:24:52.227 [2024-12-09 05:19:34.655113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.227 [2024-12-09 05:19:34.655125] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e100, cid 0, qid 0 00:24:52.227 [2024-12-09 05:19:34.655194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.227 [2024-12-09 05:19:34.655200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.227 [2024-12-09 05:19:34.655205] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e100) on tqpair=0xeac690 00:24:52.227 [2024-12-09 05:19:34.655219] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:52.227 [2024-12-09 05:19:34.655225] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:52.227 [2024-12-09 05:19:34.655234] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:52.227 [2024-12-09 05:19:34.655245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:52.227 [2024-12-09 05:19:34.655254] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeac690) 00:24:52.227 [2024-12-09 05:19:34.655268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.227 [2024-12-09 05:19:34.655280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e100, cid 0, qid 0 00:24:52.227 [2024-12-09 05:19:34.655384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:52.227 [2024-12-09 05:19:34.655391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:52.227 [2024-12-09 05:19:34.655396] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655400] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeac690): datao=0, datal=4096, cccid=0 00:24:52.227 [2024-12-09 05:19:34.655406] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf0e100) on tqpair(0xeac690): expected_datao=0, payload_size=4096 00:24:52.227 [2024-12-09 05:19:34.655412] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655419] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655424] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655439] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.227 [2024-12-09 05:19:34.655446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.227 [2024-12-09 05:19:34.655450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655455] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e100) on tqpair=0xeac690 00:24:52.227 [2024-12-09 05:19:34.655463] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:52.227 [2024-12-09 05:19:34.655469] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:52.227 [2024-12-09 05:19:34.655475] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:52.227 [2024-12-09 05:19:34.655480] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:52.227 [2024-12-09 05:19:34.655486] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:52.227 [2024-12-09 05:19:34.655491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:52.227 [2024-12-09 05:19:34.655501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:52.227 [2024-12-09 05:19:34.655509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655513] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655518] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeac690) 00:24:52.227 [2024-12-09 05:19:34.655525] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:52.227 [2024-12-09 05:19:34.655537] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e100, cid 0, qid 0 00:24:52.227 [2024-12-09 05:19:34.655603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.227 [2024-12-09 05:19:34.655610] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.227 [2024-12-09 05:19:34.655614] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e100) on tqpair=0xeac690 00:24:52.227 [2024-12-09 05:19:34.655626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655630] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeac690) 00:24:52.227 [2024-12-09 05:19:34.655641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.227 [2024-12-09 05:19:34.655650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655660] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xeac690) 00:24:52.227 [2024-12-09 05:19:34.655666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:52.227 [2024-12-09 05:19:34.655673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xeac690) 00:24:52.227 [2024-12-09 05:19:34.655688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.227 [2024-12-09 05:19:34.655695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeac690) 00:24:52.227 [2024-12-09 05:19:34.655710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.227 [2024-12-09 05:19:34.655716] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:52.227 [2024-12-09 05:19:34.655728] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:52.227 [2024-12-09 05:19:34.655736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655741] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeac690) 00:24:52.227 [2024-12-09 05:19:34.655748] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.227 [2024-12-09 05:19:34.655761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xf0e100, cid 0, qid 0 00:24:52.227 [2024-12-09 05:19:34.655767] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e280, cid 1, qid 0 00:24:52.227 [2024-12-09 05:19:34.655772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e400, cid 2, qid 0 00:24:52.227 [2024-12-09 05:19:34.655777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e580, cid 3, qid 0 00:24:52.227 [2024-12-09 05:19:34.655783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e700, cid 4, qid 0 00:24:52.227 [2024-12-09 05:19:34.655875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.227 [2024-12-09 05:19:34.655882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.227 [2024-12-09 05:19:34.655886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e700) on tqpair=0xeac690 00:24:52.227 [2024-12-09 05:19:34.655896] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:52.227 [2024-12-09 05:19:34.655902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:52.227 [2024-12-09 05:19:34.655914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:52.227 [2024-12-09 05:19:34.655922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:52.227 [2024-12-09 05:19:34.655929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.227 [2024-12-09 05:19:34.655933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.228 [2024-12-09 
05:19:34.655939] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeac690) 00:24:52.228 [2024-12-09 05:19:34.655946] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:52.228 [2024-12-09 05:19:34.655958] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e700, cid 4, qid 0 00:24:52.228 [2024-12-09 05:19:34.656030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.228 [2024-12-09 05:19:34.656037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.228 [2024-12-09 05:19:34.656041] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656046] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e700) on tqpair=0xeac690 00:24:52.228 [2024-12-09 05:19:34.656098] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:52.228 [2024-12-09 05:19:34.656109] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:52.228 [2024-12-09 05:19:34.656117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeac690) 00:24:52.228 [2024-12-09 05:19:34.656128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.228 [2024-12-09 05:19:34.656140] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e700, cid 4, qid 0 00:24:52.228 [2024-12-09 05:19:34.656223] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:52.228 [2024-12-09 05:19:34.656231] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:52.228 [2024-12-09 05:19:34.656235] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656240] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeac690): datao=0, datal=4096, cccid=4 00:24:52.228 [2024-12-09 05:19:34.656246] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf0e700) on tqpair(0xeac690): expected_datao=0, payload_size=4096 00:24:52.228 [2024-12-09 05:19:34.656251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656258] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656263] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.228 [2024-12-09 05:19:34.656278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.228 [2024-12-09 05:19:34.656282] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656287] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e700) on tqpair=0xeac690 00:24:52.228 [2024-12-09 05:19:34.656299] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:52.228 [2024-12-09 05:19:34.656309] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:52.228 [2024-12-09 05:19:34.656320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:52.228 [2024-12-09 05:19:34.656328] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656332] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 
on tqpair(0xeac690) 00:24:52.228 [2024-12-09 05:19:34.656339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.228 [2024-12-09 05:19:34.656352] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e700, cid 4, qid 0 00:24:52.228 [2024-12-09 05:19:34.656436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:52.228 [2024-12-09 05:19:34.656443] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:52.228 [2024-12-09 05:19:34.656449] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656454] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeac690): datao=0, datal=4096, cccid=4 00:24:52.228 [2024-12-09 05:19:34.656459] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf0e700) on tqpair(0xeac690): expected_datao=0, payload_size=4096 00:24:52.228 [2024-12-09 05:19:34.656465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656481] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656486] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.228 [2024-12-09 05:19:34.656530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.228 [2024-12-09 05:19:34.656535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e700) on tqpair=0xeac690 00:24:52.228 [2024-12-09 05:19:34.656551] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:52.228 [2024-12-09 
05:19:34.656561] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:52.228 [2024-12-09 05:19:34.656569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656573] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeac690) 00:24:52.228 [2024-12-09 05:19:34.656580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.228 [2024-12-09 05:19:34.656592] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e700, cid 4, qid 0 00:24:52.228 [2024-12-09 05:19:34.656670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:52.228 [2024-12-09 05:19:34.656677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:52.228 [2024-12-09 05:19:34.656682] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656686] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeac690): datao=0, datal=4096, cccid=4 00:24:52.228 [2024-12-09 05:19:34.656692] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf0e700) on tqpair(0xeac690): expected_datao=0, payload_size=4096 00:24:52.228 [2024-12-09 05:19:34.656697] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656704] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656709] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656720] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.228 [2024-12-09 05:19:34.656727] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.228 [2024-12-09 05:19:34.656731] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e700) on tqpair=0xeac690 00:24:52.228 [2024-12-09 05:19:34.656746] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:52.228 [2024-12-09 05:19:34.656756] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:52.228 [2024-12-09 05:19:34.656766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:52.228 [2024-12-09 05:19:34.656773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:52.228 [2024-12-09 05:19:34.656779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:52.228 [2024-12-09 05:19:34.656787] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:52.228 [2024-12-09 05:19:34.656794] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:52.228 [2024-12-09 05:19:34.656799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:52.228 [2024-12-09 05:19:34.656806] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:52.228 [2024-12-09 05:19:34.656820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656825] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeac690) 00:24:52.228 [2024-12-09 05:19:34.656832] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.228 [2024-12-09 05:19:34.656839] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.228 [2024-12-09 05:19:34.656848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xeac690) 00:24:52.228 [2024-12-09 05:19:34.656855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.228 [2024-12-09 05:19:34.656869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e700, cid 4, qid 0 00:24:52.228 [2024-12-09 05:19:34.656875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e880, cid 5, qid 0 00:24:52.229 [2024-12-09 05:19:34.656950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.229 [2024-12-09 05:19:34.656957] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.229 [2024-12-09 05:19:34.656961] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.656966] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e700) on tqpair=0xeac690 00:24:52.229 [2024-12-09 05:19:34.656972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.229 [2024-12-09 05:19:34.656979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.229 [2024-12-09 05:19:34.656983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.656988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e880) on tqpair=0xeac690 00:24:52.229 [2024-12-09 
05:19:34.656998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657002] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xeac690) 00:24:52.229 [2024-12-09 05:19:34.657009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.229 [2024-12-09 05:19:34.657021] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e880, cid 5, qid 0 00:24:52.229 [2024-12-09 05:19:34.657089] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.229 [2024-12-09 05:19:34.657096] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.229 [2024-12-09 05:19:34.657100] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657105] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e880) on tqpair=0xeac690 00:24:52.229 [2024-12-09 05:19:34.657115] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xeac690) 00:24:52.229 [2024-12-09 05:19:34.657127] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.229 [2024-12-09 05:19:34.657138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e880, cid 5, qid 0 00:24:52.229 [2024-12-09 05:19:34.657202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.229 [2024-12-09 05:19:34.657215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.229 [2024-12-09 05:19:34.657220] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0xf0e880) on tqpair=0xeac690 00:24:52.229 [2024-12-09 05:19:34.657234] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xeac690) 00:24:52.229 [2024-12-09 05:19:34.657245] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.229 [2024-12-09 05:19:34.657257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e880, cid 5, qid 0 00:24:52.229 [2024-12-09 05:19:34.657322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.229 [2024-12-09 05:19:34.657329] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.229 [2024-12-09 05:19:34.657333] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657338] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e880) on tqpair=0xeac690 00:24:52.229 [2024-12-09 05:19:34.657354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xeac690) 00:24:52.229 [2024-12-09 05:19:34.657366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.229 [2024-12-09 05:19:34.657375] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeac690) 00:24:52.229 [2024-12-09 05:19:34.657386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.229 
[2024-12-09 05:19:34.657394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xeac690) 00:24:52.229 [2024-12-09 05:19:34.657405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.229 [2024-12-09 05:19:34.657413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xeac690) 00:24:52.229 [2024-12-09 05:19:34.657424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.229 [2024-12-09 05:19:34.657437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e880, cid 5, qid 0 00:24:52.229 [2024-12-09 05:19:34.657442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e700, cid 4, qid 0 00:24:52.229 [2024-12-09 05:19:34.657448] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0ea00, cid 6, qid 0 00:24:52.229 [2024-12-09 05:19:34.657453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0eb80, cid 7, qid 0 00:24:52.229 [2024-12-09 05:19:34.657608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:52.229 [2024-12-09 05:19:34.657616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:52.229 [2024-12-09 05:19:34.657621] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657625] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeac690): datao=0, datal=8192, cccid=5 00:24:52.229 [2024-12-09 05:19:34.657631] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0xf0e880) on tqpair(0xeac690): expected_datao=0, payload_size=8192 00:24:52.229 [2024-12-09 05:19:34.657637] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657652] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657656] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:52.229 [2024-12-09 05:19:34.657672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:52.229 [2024-12-09 05:19:34.657676] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657681] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeac690): datao=0, datal=512, cccid=4 00:24:52.229 [2024-12-09 05:19:34.657687] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf0e700) on tqpair(0xeac690): expected_datao=0, payload_size=512 00:24:52.229 [2024-12-09 05:19:34.657692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657699] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657703] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:52.229 [2024-12-09 05:19:34.657716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:52.229 [2024-12-09 05:19:34.657720] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657725] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeac690): datao=0, datal=512, cccid=6 00:24:52.229 [2024-12-09 05:19:34.657730] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf0ea00) on tqpair(0xeac690): expected_datao=0, 
payload_size=512 00:24:52.229 [2024-12-09 05:19:34.657736] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657742] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657747] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:52.229 [2024-12-09 05:19:34.657759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:52.229 [2024-12-09 05:19:34.657763] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657768] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeac690): datao=0, datal=4096, cccid=7 00:24:52.229 [2024-12-09 05:19:34.657774] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf0eb80) on tqpair(0xeac690): expected_datao=0, payload_size=4096 00:24:52.229 [2024-12-09 05:19:34.657779] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657786] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657791] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.229 [2024-12-09 05:19:34.657806] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.229 [2024-12-09 05:19:34.657810] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657815] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e880) on tqpair=0xeac690 00:24:52.229 [2024-12-09 05:19:34.657827] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.229 [2024-12-09 05:19:34.657833] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.229 [2024-12-09 
05:19:34.657838] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657843] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e700) on tqpair=0xeac690 00:24:52.229 [2024-12-09 05:19:34.657853] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.229 [2024-12-09 05:19:34.657860] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.229 [2024-12-09 05:19:34.657864] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657869] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0ea00) on tqpair=0xeac690 00:24:52.229 [2024-12-09 05:19:34.657876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.229 [2024-12-09 05:19:34.657882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.229 [2024-12-09 05:19:34.657888] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.229 [2024-12-09 05:19:34.657893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0eb80) on tqpair=0xeac690 00:24:52.229 ===================================================== 00:24:52.229 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:52.229 ===================================================== 00:24:52.229 Controller Capabilities/Features 00:24:52.229 ================================ 00:24:52.229 Vendor ID: 8086 00:24:52.229 Subsystem Vendor ID: 8086 00:24:52.229 Serial Number: SPDK00000000000001 00:24:52.229 Model Number: SPDK bdev Controller 00:24:52.229 Firmware Version: 25.01 00:24:52.229 Recommended Arb Burst: 6 00:24:52.229 IEEE OUI Identifier: e4 d2 5c 00:24:52.229 Multi-path I/O 00:24:52.229 May have multiple subsystem ports: Yes 00:24:52.229 May have multiple controllers: Yes 00:24:52.229 Associated with SR-IOV VF: No 00:24:52.230 Max Data Transfer Size: 131072 00:24:52.230 Max Number of Namespaces: 32 00:24:52.230 
Max Number of I/O Queues: 127 00:24:52.230 NVMe Specification Version (VS): 1.3 00:24:52.230 NVMe Specification Version (Identify): 1.3 00:24:52.230 Maximum Queue Entries: 128 00:24:52.230 Contiguous Queues Required: Yes 00:24:52.230 Arbitration Mechanisms Supported 00:24:52.230 Weighted Round Robin: Not Supported 00:24:52.230 Vendor Specific: Not Supported 00:24:52.230 Reset Timeout: 15000 ms 00:24:52.230 Doorbell Stride: 4 bytes 00:24:52.230 NVM Subsystem Reset: Not Supported 00:24:52.230 Command Sets Supported 00:24:52.230 NVM Command Set: Supported 00:24:52.230 Boot Partition: Not Supported 00:24:52.230 Memory Page Size Minimum: 4096 bytes 00:24:52.230 Memory Page Size Maximum: 4096 bytes 00:24:52.230 Persistent Memory Region: Not Supported 00:24:52.230 Optional Asynchronous Events Supported 00:24:52.230 Namespace Attribute Notices: Supported 00:24:52.230 Firmware Activation Notices: Not Supported 00:24:52.230 ANA Change Notices: Not Supported 00:24:52.230 PLE Aggregate Log Change Notices: Not Supported 00:24:52.230 LBA Status Info Alert Notices: Not Supported 00:24:52.230 EGE Aggregate Log Change Notices: Not Supported 00:24:52.230 Normal NVM Subsystem Shutdown event: Not Supported 00:24:52.230 Zone Descriptor Change Notices: Not Supported 00:24:52.230 Discovery Log Change Notices: Not Supported 00:24:52.230 Controller Attributes 00:24:52.230 128-bit Host Identifier: Supported 00:24:52.230 Non-Operational Permissive Mode: Not Supported 00:24:52.230 NVM Sets: Not Supported 00:24:52.230 Read Recovery Levels: Not Supported 00:24:52.230 Endurance Groups: Not Supported 00:24:52.230 Predictable Latency Mode: Not Supported 00:24:52.230 Traffic Based Keep ALive: Not Supported 00:24:52.230 Namespace Granularity: Not Supported 00:24:52.230 SQ Associations: Not Supported 00:24:52.230 UUID List: Not Supported 00:24:52.230 Multi-Domain Subsystem: Not Supported 00:24:52.230 Fixed Capacity Management: Not Supported 00:24:52.230 Variable Capacity Management: Not Supported 
00:24:52.230 Delete Endurance Group: Not Supported 00:24:52.230 Delete NVM Set: Not Supported 00:24:52.230 Extended LBA Formats Supported: Not Supported 00:24:52.230 Flexible Data Placement Supported: Not Supported 00:24:52.230 00:24:52.230 Controller Memory Buffer Support 00:24:52.230 ================================ 00:24:52.230 Supported: No 00:24:52.230 00:24:52.230 Persistent Memory Region Support 00:24:52.230 ================================ 00:24:52.230 Supported: No 00:24:52.230 00:24:52.230 Admin Command Set Attributes 00:24:52.230 ============================ 00:24:52.230 Security Send/Receive: Not Supported 00:24:52.230 Format NVM: Not Supported 00:24:52.230 Firmware Activate/Download: Not Supported 00:24:52.230 Namespace Management: Not Supported 00:24:52.230 Device Self-Test: Not Supported 00:24:52.230 Directives: Not Supported 00:24:52.230 NVMe-MI: Not Supported 00:24:52.230 Virtualization Management: Not Supported 00:24:52.230 Doorbell Buffer Config: Not Supported 00:24:52.230 Get LBA Status Capability: Not Supported 00:24:52.230 Command & Feature Lockdown Capability: Not Supported 00:24:52.230 Abort Command Limit: 4 00:24:52.230 Async Event Request Limit: 4 00:24:52.230 Number of Firmware Slots: N/A 00:24:52.230 Firmware Slot 1 Read-Only: N/A 00:24:52.230 Firmware Activation Without Reset: N/A 00:24:52.230 Multiple Update Detection Support: N/A 00:24:52.230 Firmware Update Granularity: No Information Provided 00:24:52.230 Per-Namespace SMART Log: No 00:24:52.230 Asymmetric Namespace Access Log Page: Not Supported 00:24:52.230 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:52.230 Command Effects Log Page: Supported 00:24:52.230 Get Log Page Extended Data: Supported 00:24:52.230 Telemetry Log Pages: Not Supported 00:24:52.230 Persistent Event Log Pages: Not Supported 00:24:52.230 Supported Log Pages Log Page: May Support 00:24:52.230 Commands Supported & Effects Log Page: Not Supported 00:24:52.230 Feature Identifiers & Effects Log Page:May Support 
00:24:52.230 NVMe-MI Commands & Effects Log Page: May Support 00:24:52.230 Data Area 4 for Telemetry Log: Not Supported 00:24:52.230 Error Log Page Entries Supported: 128 00:24:52.230 Keep Alive: Supported 00:24:52.230 Keep Alive Granularity: 10000 ms 00:24:52.230 00:24:52.230 NVM Command Set Attributes 00:24:52.230 ========================== 00:24:52.230 Submission Queue Entry Size 00:24:52.230 Max: 64 00:24:52.230 Min: 64 00:24:52.230 Completion Queue Entry Size 00:24:52.230 Max: 16 00:24:52.230 Min: 16 00:24:52.230 Number of Namespaces: 32 00:24:52.230 Compare Command: Supported 00:24:52.230 Write Uncorrectable Command: Not Supported 00:24:52.230 Dataset Management Command: Supported 00:24:52.230 Write Zeroes Command: Supported 00:24:52.230 Set Features Save Field: Not Supported 00:24:52.230 Reservations: Supported 00:24:52.230 Timestamp: Not Supported 00:24:52.230 Copy: Supported 00:24:52.230 Volatile Write Cache: Present 00:24:52.230 Atomic Write Unit (Normal): 1 00:24:52.230 Atomic Write Unit (PFail): 1 00:24:52.230 Atomic Compare & Write Unit: 1 00:24:52.230 Fused Compare & Write: Supported 00:24:52.230 Scatter-Gather List 00:24:52.230 SGL Command Set: Supported 00:24:52.230 SGL Keyed: Supported 00:24:52.230 SGL Bit Bucket Descriptor: Not Supported 00:24:52.230 SGL Metadata Pointer: Not Supported 00:24:52.230 Oversized SGL: Not Supported 00:24:52.230 SGL Metadata Address: Not Supported 00:24:52.230 SGL Offset: Supported 00:24:52.230 Transport SGL Data Block: Not Supported 00:24:52.230 Replay Protected Memory Block: Not Supported 00:24:52.230 00:24:52.230 Firmware Slot Information 00:24:52.230 ========================= 00:24:52.230 Active slot: 1 00:24:52.230 Slot 1 Firmware Revision: 25.01 00:24:52.230 00:24:52.230 00:24:52.230 Commands Supported and Effects 00:24:52.230 ============================== 00:24:52.230 Admin Commands 00:24:52.230 -------------- 00:24:52.230 Get Log Page (02h): Supported 00:24:52.230 Identify (06h): Supported 00:24:52.230 Abort 
(08h): Supported 00:24:52.230 Set Features (09h): Supported 00:24:52.230 Get Features (0Ah): Supported 00:24:52.230 Asynchronous Event Request (0Ch): Supported 00:24:52.230 Keep Alive (18h): Supported 00:24:52.230 I/O Commands 00:24:52.230 ------------ 00:24:52.230 Flush (00h): Supported LBA-Change 00:24:52.230 Write (01h): Supported LBA-Change 00:24:52.230 Read (02h): Supported 00:24:52.230 Compare (05h): Supported 00:24:52.230 Write Zeroes (08h): Supported LBA-Change 00:24:52.230 Dataset Management (09h): Supported LBA-Change 00:24:52.230 Copy (19h): Supported LBA-Change 00:24:52.230 00:24:52.230 Error Log 00:24:52.230 ========= 00:24:52.230 00:24:52.230 Arbitration 00:24:52.230 =========== 00:24:52.230 Arbitration Burst: 1 00:24:52.230 00:24:52.230 Power Management 00:24:52.230 ================ 00:24:52.230 Number of Power States: 1 00:24:52.230 Current Power State: Power State #0 00:24:52.230 Power State #0: 00:24:52.230 Max Power: 0.00 W 00:24:52.230 Non-Operational State: Operational 00:24:52.230 Entry Latency: Not Reported 00:24:52.230 Exit Latency: Not Reported 00:24:52.230 Relative Read Throughput: 0 00:24:52.230 Relative Read Latency: 0 00:24:52.230 Relative Write Throughput: 0 00:24:52.230 Relative Write Latency: 0 00:24:52.230 Idle Power: Not Reported 00:24:52.230 Active Power: Not Reported 00:24:52.230 Non-Operational Permissive Mode: Not Supported 00:24:52.230 00:24:52.230 Health Information 00:24:52.230 ================== 00:24:52.230 Critical Warnings: 00:24:52.230 Available Spare Space: OK 00:24:52.230 Temperature: OK 00:24:52.230 Device Reliability: OK 00:24:52.230 Read Only: No 00:24:52.230 Volatile Memory Backup: OK 00:24:52.230 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:52.230 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:52.230 Available Spare: 0% 00:24:52.230 Available Spare Threshold: 0% 00:24:52.230 Life Percentage Used:[2024-12-09 05:19:34.657977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.230 
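The controller summary above is SPDK's identify-style dump with the harness's elapsed-time stamps (`00:24:52.xxx`) interleaved into every line. When post-processing such a transcript, the key/value fields can be recovered by stripping the stamps first; a minimal sketch (`parse_identify_summary` is a hypothetical helper written for this log format, not part of SPDK):

```python
import re

def parse_identify_summary(log_text: str) -> dict:
    """Parse 'Field: value' pairs out of an SPDK identify-style dump.

    Harness timestamps of the form HH:MM:SS.mmm are stripped first
    (each one also acts as a record separator), then simple
    'Key: Value' lines are collected into a dict.
    """
    # Replace harness timestamps with newlines so fields land on their own lines
    cleaned = re.sub(r"\b\d{2}:\d{2}:\d{2}\.\d{3}\b", "\n", log_text)
    fields = {}
    for line in cleaned.splitlines():
        m = re.match(r"\s*([A-Za-z][\w\s/()&#-]*?):\s+(.+?)\s*$", line)
        if m:
            fields[m.group(1).strip()] = m.group(2).strip()
    return fields

# A small excerpt in the same shape as the transcript above
sample = (
    "00:24:52.229 Serial Number: SPDK00000000000001 "
    "00:24:52.229 Model Number: SPDK bdev Controller "
    "00:24:52.229 Firmware Version: 25.01 "
)
info = parse_identify_summary(sample)
```

This is only a post-processing aid for reading the transcript; the live values come from the Identify Controller data the target returned over the admin queue.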
[2024-12-09 05:19:34.657982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xeac690) 00:24:52.230 [2024-12-09 05:19:34.657990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.230 [2024-12-09 05:19:34.658003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0eb80, cid 7, qid 0 00:24:52.230 [2024-12-09 05:19:34.658077] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.230 [2024-12-09 05:19:34.658084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.230 [2024-12-09 05:19:34.658089] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.230 [2024-12-09 05:19:34.658093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0eb80) on tqpair=0xeac690 00:24:52.230 [2024-12-09 05:19:34.658128] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:52.231 [2024-12-09 05:19:34.658139] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e100) on tqpair=0xeac690 00:24:52.231 [2024-12-09 05:19:34.658146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.231 [2024-12-09 05:19:34.658152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e280) on tqpair=0xeac690 00:24:52.231 [2024-12-09 05:19:34.658158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.231 [2024-12-09 05:19:34.658164] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e400) on tqpair=0xeac690 00:24:52.231 [2024-12-09 05:19:34.658169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.231 
[2024-12-09 05:19:34.658175] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e580) on tqpair=0xeac690 00:24:52.231 [2024-12-09 05:19:34.658180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.231 [2024-12-09 05:19:34.658189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.658193] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.658198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeac690) 00:24:52.231 [2024-12-09 05:19:34.658205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.231 [2024-12-09 05:19:34.662227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e580, cid 3, qid 0 00:24:52.231 [2024-12-09 05:19:34.662363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.231 [2024-12-09 05:19:34.662370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.231 [2024-12-09 05:19:34.662375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.662380] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e580) on tqpair=0xeac690 00:24:52.231 [2024-12-09 05:19:34.662387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.662392] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.662396] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeac690) 00:24:52.231 [2024-12-09 05:19:34.662403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.231 [2024-12-09 05:19:34.662419] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e580, cid 3, qid 0 00:24:52.231 [2024-12-09 05:19:34.662495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.231 [2024-12-09 05:19:34.662504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.231 [2024-12-09 05:19:34.662508] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.662513] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e580) on tqpair=0xeac690 00:24:52.231 [2024-12-09 05:19:34.662519] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:52.231 [2024-12-09 05:19:34.662525] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:52.231 [2024-12-09 05:19:34.662535] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.662540] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.662544] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeac690) 00:24:52.231 [2024-12-09 05:19:34.662551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.231 [2024-12-09 05:19:34.662562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e580, cid 3, qid 0 00:24:52.231 [2024-12-09 05:19:34.662624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.231 [2024-12-09 05:19:34.662630] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.231 [2024-12-09 05:19:34.662635] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.662639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e580) on tqpair=0xeac690 00:24:52.231 [2024-12-09 05:19:34.662649] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.662654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.662658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeac690) 00:24:52.231 [2024-12-09 05:19:34.662665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.231 [2024-12-09 05:19:34.662676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e580, cid 3, qid 0 00:24:52.231 [2024-12-09 05:19:34.662739] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.231 [2024-12-09 05:19:34.662745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.231 [2024-12-09 05:19:34.662750] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.662754] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e580) on tqpair=0xeac690 00:24:52.231 [2024-12-09 05:19:34.662764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.662769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.662773] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeac690) 00:24:52.231 [2024-12-09 05:19:34.662780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.231 [2024-12-09 05:19:34.662791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e580, cid 3, qid 0 00:24:52.231 [2024-12-09 05:19:34.662858] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.231 [2024-12-09 05:19:34.662865] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.231 [2024-12-09 05:19:34.662869] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.662874] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e580) on tqpair=0xeac690 00:24:52.231 [2024-12-09 05:19:34.662883] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.662888] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.662892] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeac690) 00:24:52.231 [2024-12-09 05:19:34.662899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.231 [2024-12-09 05:19:34.662912] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e580, cid 3, qid 0 00:24:52.231 [2024-12-09 05:19:34.662972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.231 [2024-12-09 05:19:34.662978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.231 [2024-12-09 05:19:34.662983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.662987] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e580) on tqpair=0xeac690 00:24:52.231 [2024-12-09 05:19:34.662997] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.663002] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.663006] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeac690) 00:24:52.231 [2024-12-09 05:19:34.663013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.231 [2024-12-09 05:19:34.663024] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e580, cid 3, qid 0 00:24:52.231 [2024-12-09 
05:19:34.663091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.231 [2024-12-09 05:19:34.663098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.231 [2024-12-09 05:19:34.663102] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.663107] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e580) on tqpair=0xeac690 00:24:52.231 [2024-12-09 05:19:34.663116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.663121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.663126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeac690) 00:24:52.231 [2024-12-09 05:19:34.663132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.231 [2024-12-09 05:19:34.663144] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e580, cid 3, qid 0 00:24:52.231 [2024-12-09 05:19:34.663205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.231 [2024-12-09 05:19:34.663216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.231 [2024-12-09 05:19:34.663220] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.663225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e580) on tqpair=0xeac690 00:24:52.231 [2024-12-09 05:19:34.663234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.663239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.663244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeac690) 00:24:52.231 [2024-12-09 05:19:34.663250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.231 [2024-12-09 05:19:34.663262] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e580, cid 3, qid 0 00:24:52.231 [2024-12-09 05:19:34.663323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.231 [2024-12-09 05:19:34.663330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.231 [2024-12-09 05:19:34.663334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.663339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e580) on tqpair=0xeac690 00:24:52.231 [2024-12-09 05:19:34.663348] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.663353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.663358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeac690) 00:24:52.231 [2024-12-09 05:19:34.663364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.231 [2024-12-09 05:19:34.663376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e580, cid 3, qid 0 00:24:52.231 [2024-12-09 05:19:34.663440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.231 [2024-12-09 05:19:34.663447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.231 [2024-12-09 05:19:34.663451] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.663456] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e580) on tqpair=0xeac690 00:24:52.231 [2024-12-09 05:19:34.663465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.231 [2024-12-09 05:19:34.663470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.231 
[2024-12-09 05:19:34.663474] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeac690) 00:24:52.231 [2024-12-09 05:19:34.663481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.231 [2024-12-09 05:19:34.663492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e580, cid 3, qid 0 00:24:52.232 [2024-12-09 05:19:34.663551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.232 [2024-12-09 05:19:34.663558] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.232 [2024-12-09 05:19:34.663562] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.663567] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e580) on tqpair=0xeac690 00:24:52.232 [2024-12-09 05:19:34.663576] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.663581] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.663585] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeac690) 00:24:52.232 [2024-12-09 05:19:34.663592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.232 [2024-12-09 05:19:34.663604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e580, cid 3, qid 0 00:24:52.232 [2024-12-09 05:19:34.663672] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.232 [2024-12-09 05:19:34.663678] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.232 [2024-12-09 05:19:34.663683] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.663687] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e580) on tqpair=0xeac690 
00:24:52.232 [2024-12-09 05:19:34.663697] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.663702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.663706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeac690) 00:24:52.232 [2024-12-09 05:19:34.663713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.232 [2024-12-09 05:19:34.663724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e580, cid 3, qid 0 00:24:52.232 [2024-12-09 05:19:34.663783] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.232 [2024-12-09 05:19:34.663790] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.232 [2024-12-09 05:19:34.663794] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.663799] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e580) on tqpair=0xeac690 00:24:52.232 [2024-12-09 05:19:34.663808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.663813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.663818] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeac690) 00:24:52.232 [2024-12-09 05:19:34.663824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.232 [2024-12-09 05:19:34.663836] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e580, cid 3, qid 0 00:24:52.232 [2024-12-09 05:19:34.663900] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.232 [2024-12-09 05:19:34.663907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.232 
[2024-12-09 05:19:34.663913] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.663918] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e580) on tqpair=0xeac690 00:24:52.232 [2024-12-09 05:19:34.663927] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.663932] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.663936] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeac690) 00:24:52.232 [2024-12-09 05:19:34.663943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.232 [2024-12-09 05:19:34.663955] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e580, cid 3, qid 0 00:24:52.232 [2024-12-09 05:19:34.664026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.232 [2024-12-09 05:19:34.664033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.232 [2024-12-09 05:19:34.664037] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.664042] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e580) on tqpair=0xeac690 00:24:52.232 [2024-12-09 05:19:34.664052] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.664057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.664061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeac690) 00:24:52.232 [2024-12-09 05:19:34.664068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.232 [2024-12-09 05:19:34.664079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e580, cid 3, qid 0 
00:24:52.232 [2024-12-09 05:19:34.664142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.232 [2024-12-09 05:19:34.664149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.232 [2024-12-09 05:19:34.664153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.664158] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e580) on tqpair=0xeac690 00:24:52.232 [2024-12-09 05:19:34.664167] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.664172] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.664176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeac690) 00:24:52.232 [2024-12-09 05:19:34.664183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.232 [2024-12-09 05:19:34.664195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e580, cid 3, qid 0 00:24:52.232 [2024-12-09 05:19:34.664258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.232 [2024-12-09 05:19:34.664265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.232 [2024-12-09 05:19:34.664269] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.664274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e580) on tqpair=0xeac690 00:24:52.232 [2024-12-09 05:19:34.664283] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.664288] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.664293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeac690) 00:24:52.232 [2024-12-09 05:19:34.664300] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.232 [2024-12-09 05:19:34.664311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e580, cid 3, qid 0 00:24:52.232 [2024-12-09 05:19:34.668215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.232 [2024-12-09 05:19:34.668224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.232 [2024-12-09 05:19:34.668228] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.668235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e580) on tqpair=0xeac690 00:24:52.232 [2024-12-09 05:19:34.668247] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.668252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.668256] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeac690) 00:24:52.232 [2024-12-09 05:19:34.668263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.232 [2024-12-09 05:19:34.668276] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf0e580, cid 3, qid 0 00:24:52.232 [2024-12-09 05:19:34.668412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:52.232 [2024-12-09 05:19:34.668418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:52.232 [2024-12-09 05:19:34.668423] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:52.232 [2024-12-09 05:19:34.668427] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf0e580) on tqpair=0xeac690 00:24:52.232 [2024-12-09 05:19:34.668435] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:24:52.492 
0% 00:24:52.492 Data Units Read: 0 00:24:52.492 Data Units Written: 0 00:24:52.492 Host Read Commands: 0 00:24:52.492 Host Write Commands: 0 00:24:52.492 Controller Busy Time: 0 minutes 00:24:52.492 Power Cycles: 0 00:24:52.492 Power On Hours: 0 hours 00:24:52.492 Unsafe Shutdowns: 0 00:24:52.492 Unrecoverable Media Errors: 0 00:24:52.492 Lifetime Error Log Entries: 0 00:24:52.492 Warning Temperature Time: 0 minutes 00:24:52.492 Critical Temperature Time: 0 minutes 00:24:52.492 00:24:52.492 Number of Queues 00:24:52.492 ================ 00:24:52.492 Number of I/O Submission Queues: 127 00:24:52.492 Number of I/O Completion Queues: 127 00:24:52.492 00:24:52.492 Active Namespaces 00:24:52.492 ================= 00:24:52.492 Namespace ID:1 00:24:52.492 Error Recovery Timeout: Unlimited 00:24:52.492 Command Set Identifier: NVM (00h) 00:24:52.492 Deallocate: Supported 00:24:52.492 Deallocated/Unwritten Error: Not Supported 00:24:52.492 Deallocated Read Value: Unknown 00:24:52.492 Deallocate in Write Zeroes: Not Supported 00:24:52.492 Deallocated Guard Field: 0xFFFF 00:24:52.492 Flush: Supported 00:24:52.492 Reservation: Supported 00:24:52.492 Namespace Sharing Capabilities: Multiple Controllers 00:24:52.493 Size (in LBAs): 131072 (0GiB) 00:24:52.493 Capacity (in LBAs): 131072 (0GiB) 00:24:52.493 Utilization (in LBAs): 131072 (0GiB) 00:24:52.493 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:52.493 EUI64: ABCDEF0123456789 00:24:52.493 UUID: 233b9d09-fd06-4afe-a84d-269ec90f6c1a 00:24:52.493 Thin Provisioning: Not Supported 00:24:52.493 Per-NS Atomic Units: Yes 00:24:52.493 Atomic Boundary Size (Normal): 0 00:24:52.493 Atomic Boundary Size (PFail): 0 00:24:52.493 Atomic Boundary Offset: 0 00:24:52.493 Maximum Single Source Range Length: 65535 00:24:52.493 Maximum Copy Length: 65535 00:24:52.493 Maximum Source Range Count: 1 00:24:52.493 NGUID/EUI64 Never Reused: No 00:24:52.493 Namespace Write Protected: No 00:24:52.493 Number of LBA Formats: 1 00:24:52.493 Current LBA 
Format: LBA Format #00 00:24:52.493 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:52.493 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:52.493 rmmod nvme_tcp 00:24:52.493 rmmod nvme_fabrics 00:24:52.493 rmmod nvme_keyring 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 576653 ']' 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 
576653 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 576653 ']' 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 576653 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 576653 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 576653' 00:24:52.493 killing process with pid 576653 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 576653 00:24:52.493 05:19:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 576653 00:24:52.753 05:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:52.753 05:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:52.753 05:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:52.753 05:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:52.753 05:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:52.753 05:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:52.753 05:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:52.753 05:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:52.753 05:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:52.753 05:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.753 05:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:52.753 05:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:55.287 00:24:55.287 real 0m11.593s 00:24:55.287 user 0m8.812s 00:24:55.287 sys 0m6.259s 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:55.287 ************************************ 00:24:55.287 END TEST nvmf_identify 00:24:55.287 ************************************ 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.287 ************************************ 00:24:55.287 START TEST nvmf_perf 00:24:55.287 ************************************ 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:55.287 * Looking for test storage... 
00:24:55.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:55.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.287 --rc genhtml_branch_coverage=1 00:24:55.287 --rc genhtml_function_coverage=1 00:24:55.287 --rc genhtml_legend=1 00:24:55.287 --rc geninfo_all_blocks=1 00:24:55.287 --rc geninfo_unexecuted_blocks=1 00:24:55.287 00:24:55.287 ' 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:55.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:24:55.287 --rc genhtml_branch_coverage=1 00:24:55.287 --rc genhtml_function_coverage=1 00:24:55.287 --rc genhtml_legend=1 00:24:55.287 --rc geninfo_all_blocks=1 00:24:55.287 --rc geninfo_unexecuted_blocks=1 00:24:55.287 00:24:55.287 ' 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:55.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.287 --rc genhtml_branch_coverage=1 00:24:55.287 --rc genhtml_function_coverage=1 00:24:55.287 --rc genhtml_legend=1 00:24:55.287 --rc geninfo_all_blocks=1 00:24:55.287 --rc geninfo_unexecuted_blocks=1 00:24:55.287 00:24:55.287 ' 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:55.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.287 --rc genhtml_branch_coverage=1 00:24:55.287 --rc genhtml_function_coverage=1 00:24:55.287 --rc genhtml_legend=1 00:24:55.287 --rc geninfo_all_blocks=1 00:24:55.287 --rc geninfo_unexecuted_blocks=1 00:24:55.287 00:24:55.287 ' 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.287 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:55.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:55.288 05:19:37 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:55.288 05:19:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:03.409 05:19:44 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:03.409 
05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:03.409 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:03.410 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:03.410 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:03.410 Found net devices under 0000:af:00.0: cvl_0_0 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:03.410 05:19:44 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:03.410 Found net devices under 0000:af:00.1: cvl_0_1 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:03.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:03.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:25:03.410 00:25:03.410 --- 10.0.0.2 ping statistics --- 00:25:03.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.410 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:03.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:03.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:25:03.410 00:25:03.410 --- 10.0.0.1 ping statistics --- 00:25:03.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.410 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=580642 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 580642 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 580642 ']' 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:03.410 05:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:03.410 [2024-12-09 05:19:44.899043] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:25:03.410 [2024-12-09 05:19:44.899090] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.410 [2024-12-09 05:19:44.996382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:03.410 [2024-12-09 05:19:45.034452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.410 [2024-12-09 05:19:45.034494] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.410 [2024-12-09 05:19:45.034503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.411 [2024-12-09 05:19:45.034511] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.411 [2024-12-09 05:19:45.034517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:03.411 [2024-12-09 05:19:45.036345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.411 [2024-12-09 05:19:45.036459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.411 [2024-12-09 05:19:45.036545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.411 [2024-12-09 05:19:45.036546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:03.411 05:19:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.411 05:19:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:25:03.411 05:19:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:03.411 05:19:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:03.411 05:19:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:03.411 05:19:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.411 05:19:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:03.411 05:19:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:06.698 05:19:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:06.698 05:19:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:06.698 05:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:25:06.698 05:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:06.955 05:19:49 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:06.955 05:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:25:06.955 05:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:06.955 05:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:06.955 05:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:07.213 [2024-12-09 05:19:49.459342] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.213 05:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:07.471 05:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:07.471 05:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:07.471 05:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:07.471 05:19:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:07.729 05:19:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.987 [2024-12-09 05:19:50.318473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.987 05:19:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:25:08.245 05:19:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:25:08.245 05:19:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:25:08.245 05:19:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:08.245 05:19:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:25:09.620 Initializing NVMe Controllers 00:25:09.620 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:25:09.620 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:25:09.620 Initialization complete. Launching workers. 00:25:09.620 ======================================================== 00:25:09.620 Latency(us) 00:25:09.620 Device Information : IOPS MiB/s Average min max 00:25:09.620 PCIE (0000:d8:00.0) NSID 1 from core 0: 101864.26 397.91 313.71 34.58 6044.67 00:25:09.620 ======================================================== 00:25:09.620 Total : 101864.26 397.91 313.71 34.58 6044.67 00:25:09.620 00:25:09.620 05:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:10.999 Initializing NVMe Controllers 00:25:10.999 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:10.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:10.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:10.999 Initialization complete. Launching workers. 
00:25:10.999 ======================================================== 00:25:10.999 Latency(us) 00:25:10.999 Device Information : IOPS MiB/s Average min max 00:25:10.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 91.00 0.36 11216.91 106.71 45666.31 00:25:10.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 40.00 0.16 25917.51 6549.22 50878.87 00:25:10.999 ======================================================== 00:25:10.999 Total : 131.00 0.51 15705.64 106.71 50878.87 00:25:10.999 00:25:11.258 05:19:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:12.648 Initializing NVMe Controllers 00:25:12.648 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:12.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:12.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:12.648 Initialization complete. Launching workers. 
00:25:12.648 ======================================================== 00:25:12.648 Latency(us) 00:25:12.648 Device Information : IOPS MiB/s Average min max 00:25:12.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11468.44 44.80 2800.29 440.10 6158.44 00:25:12.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3835.81 14.98 8377.05 5508.41 15834.72 00:25:12.648 ======================================================== 00:25:12.649 Total : 15304.25 59.78 4198.03 440.10 15834.72 00:25:12.649 00:25:12.649 05:19:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:12.649 05:19:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:12.649 05:19:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:15.186 Initializing NVMe Controllers 00:25:15.186 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:15.186 Controller IO queue size 128, less than required. 00:25:15.186 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:15.186 Controller IO queue size 128, less than required. 00:25:15.186 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:15.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:15.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:15.186 Initialization complete. Launching workers. 
00:25:15.186 ======================================================== 00:25:15.186 Latency(us) 00:25:15.187 Device Information : IOPS MiB/s Average min max 00:25:15.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1841.14 460.29 70450.41 44922.58 112938.18 00:25:15.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 619.38 154.84 214056.32 80148.27 322263.62 00:25:15.187 ======================================================== 00:25:15.187 Total : 2460.52 615.13 106599.69 44922.58 322263.62 00:25:15.187 00:25:15.187 05:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:15.187 No valid NVMe controllers or AIO or URING devices found 00:25:15.187 Initializing NVMe Controllers 00:25:15.187 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:15.187 Controller IO queue size 128, less than required. 00:25:15.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:15.187 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:15.187 Controller IO queue size 128, less than required. 00:25:15.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:15.187 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:15.187 WARNING: Some requested NVMe devices were skipped 00:25:15.187 05:19:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:18.476 Initializing NVMe Controllers 00:25:18.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:18.476 Controller IO queue size 128, less than required. 00:25:18.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:18.476 Controller IO queue size 128, less than required. 00:25:18.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:18.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:18.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:18.476 Initialization complete. Launching workers. 
00:25:18.476 00:25:18.476 ==================== 00:25:18.476 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:18.476 TCP transport: 00:25:18.476 polls: 15594 00:25:18.476 idle_polls: 11150 00:25:18.476 sock_completions: 4444 00:25:18.476 nvme_completions: 6369 00:25:18.476 submitted_requests: 9484 00:25:18.476 queued_requests: 1 00:25:18.476 00:25:18.476 ==================== 00:25:18.476 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:18.476 TCP transport: 00:25:18.476 polls: 15678 00:25:18.476 idle_polls: 11338 00:25:18.476 sock_completions: 4340 00:25:18.476 nvme_completions: 6523 00:25:18.476 submitted_requests: 9764 00:25:18.476 queued_requests: 1 00:25:18.476 ======================================================== 00:25:18.476 Latency(us) 00:25:18.476 Device Information : IOPS MiB/s Average min max 00:25:18.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1590.28 397.57 82505.20 38470.12 135503.37 00:25:18.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1628.74 407.19 78950.23 38180.33 120333.29 00:25:18.476 ======================================================== 00:25:18.476 Total : 3219.03 804.76 80706.48 38180.33 135503.37 00:25:18.476 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:18.476 rmmod nvme_tcp 00:25:18.476 rmmod nvme_fabrics 00:25:18.476 rmmod nvme_keyring 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 580642 ']' 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 580642 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 580642 ']' 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 580642 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 580642 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 580642' 00:25:18.476 killing process with pid 580642 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # 
kill 580642 00:25:18.476 05:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 580642 00:25:20.380 05:20:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:20.380 05:20:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:20.380 05:20:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:20.380 05:20:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:25:20.640 05:20:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:25:20.640 05:20:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:20.640 05:20:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:25:20.640 05:20:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:20.640 05:20:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:20.640 05:20:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.640 05:20:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.640 05:20:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.548 05:20:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:22.548 00:25:22.548 real 0m27.623s 00:25:22.548 user 1m10.894s 00:25:22.548 sys 0m9.963s 00:25:22.548 05:20:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:22.548 05:20:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:22.548 ************************************ 00:25:22.548 END TEST nvmf_perf 00:25:22.548 ************************************ 00:25:22.548 05:20:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:22.548 05:20:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:22.548 05:20:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:22.548 05:20:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.812 ************************************ 00:25:22.812 START TEST nvmf_fio_host 00:25:22.812 ************************************ 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:22.812 * Looking for test storage... 00:25:22.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:22.812 05:20:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:22.812 05:20:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:22.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.812 --rc genhtml_branch_coverage=1 00:25:22.812 --rc genhtml_function_coverage=1 00:25:22.812 --rc genhtml_legend=1 00:25:22.812 --rc geninfo_all_blocks=1 00:25:22.812 --rc geninfo_unexecuted_blocks=1 00:25:22.812 00:25:22.812 ' 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:22.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.812 --rc genhtml_branch_coverage=1 00:25:22.812 --rc genhtml_function_coverage=1 00:25:22.812 --rc genhtml_legend=1 00:25:22.812 --rc geninfo_all_blocks=1 00:25:22.812 --rc geninfo_unexecuted_blocks=1 00:25:22.812 00:25:22.812 ' 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:22.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.812 --rc genhtml_branch_coverage=1 00:25:22.812 --rc genhtml_function_coverage=1 00:25:22.812 --rc genhtml_legend=1 00:25:22.812 --rc geninfo_all_blocks=1 00:25:22.812 --rc geninfo_unexecuted_blocks=1 00:25:22.812 00:25:22.812 ' 00:25:22.812 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:22.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.813 --rc genhtml_branch_coverage=1 00:25:22.813 --rc genhtml_function_coverage=1 00:25:22.813 --rc genhtml_legend=1 00:25:22.813 --rc geninfo_all_blocks=1 00:25:22.813 --rc geninfo_unexecuted_blocks=1 00:25:22.813 00:25:22.813 ' 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.813 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:22.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:22.814 05:20:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:22.814 05:20:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.945 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:30.945 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:30.945 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:30.945 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:30.945 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:30.945 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:30.945 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:30.945 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:30.945 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:30.945 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:30.945 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:30.945 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:30.945 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:30.945 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:30.945 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:25:30.946 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:30.946 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.946 05:20:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:30.946 Found net devices under 0000:af:00.0: cvl_0_0 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:30.946 Found net devices under 0000:af:00.1: cvl_0_1 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:30.946 05:20:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:30.946 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:30.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:30.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:25:30.947 00:25:30.947 --- 10.0.0.2 ping statistics --- 00:25:30.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.947 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:30.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:30.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:25:30.947 00:25:30.947 --- 10.0.0.1 ping statistics --- 00:25:30.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.947 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=587328 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 587328 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 587328 ']' 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:30.947 05:20:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.947 [2024-12-09 05:20:12.620481] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:25:30.947 [2024-12-09 05:20:12.620525] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.947 [2024-12-09 05:20:12.714503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:30.947 [2024-12-09 05:20:12.755807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.947 [2024-12-09 05:20:12.755847] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:30.947 [2024-12-09 05:20:12.755857] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:30.947 [2024-12-09 05:20:12.755866] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:30.947 [2024-12-09 05:20:12.755873] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:30.947 [2024-12-09 05:20:12.757653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.947 [2024-12-09 05:20:12.757763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:30.947 [2024-12-09 05:20:12.757792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.947 [2024-12-09 05:20:12.757794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:31.207 05:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:31.207 05:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:25:31.207 05:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:31.207 [2024-12-09 05:20:13.619407] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.207 05:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:31.207 05:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:31.207 05:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.466 05:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:31.466 Malloc1 00:25:31.466 05:20:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:31.725 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:31.985 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:32.245 [2024-12-09 05:20:14.463735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.245 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:32.245 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:32.245 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:32.245 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:32.245 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:32.245 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:32.245 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:32.245 05:20:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:32.245 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:32.245 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:32.245 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:32.245 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:32.245 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:32.245 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:32.531 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:32.531 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:32.531 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:32.531 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:32.531 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:32.531 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:32.531 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:32.531 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:32.531 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:32.531 05:20:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:32.797 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:32.797 fio-3.35 00:25:32.797 Starting 1 thread 00:25:35.350 00:25:35.350 test: (groupid=0, jobs=1): err= 0: pid=588012: Mon Dec 9 05:20:17 2024 00:25:35.350 read: IOPS=12.2k, BW=47.5MiB/s (49.8MB/s)(95.2MiB/2005msec) 00:25:35.350 slat (nsec): min=1486, max=257007, avg=1611.75, stdev=2248.66 00:25:35.350 clat (usec): min=3109, max=9969, avg=5820.53, stdev=434.85 00:25:35.350 lat (usec): min=3144, max=9970, avg=5822.14, stdev=434.80 00:25:35.350 clat percentiles (usec): 00:25:35.350 | 1.00th=[ 4752], 5.00th=[ 5145], 10.00th=[ 5276], 20.00th=[ 5473], 00:25:35.350 | 30.00th=[ 5604], 40.00th=[ 5735], 50.00th=[ 5800], 60.00th=[ 5932], 00:25:35.350 | 70.00th=[ 6063], 80.00th=[ 6194], 90.00th=[ 6325], 95.00th=[ 6456], 00:25:35.350 | 99.00th=[ 6783], 99.50th=[ 6849], 99.90th=[ 8029], 99.95th=[ 9241], 00:25:35.350 | 99.99th=[ 9896] 00:25:35.350 bw ( KiB/s): min=47688, max=49168, per=99.98%, avg=48622.00, stdev=690.55, samples=4 00:25:35.350 iops : min=11922, max=12292, avg=12155.50, stdev=172.64, samples=4 00:25:35.350 write: IOPS=12.1k, BW=47.3MiB/s (49.6MB/s)(94.9MiB/2005msec); 0 zone resets 00:25:35.350 slat (nsec): min=1527, max=228845, avg=1674.91, stdev=1621.02 00:25:35.350 clat (usec): min=2434, max=9394, avg=4682.02, stdev=362.46 00:25:35.350 lat (usec): min=2449, max=9396, avg=4683.69, stdev=362.52 00:25:35.350 clat percentiles (usec): 00:25:35.350 | 1.00th=[ 3851], 5.00th=[ 4113], 10.00th=[ 4228], 20.00th=[ 4424], 00:25:35.350 | 30.00th=[ 4490], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4752], 
00:25:35.350 | 70.00th=[ 4883], 80.00th=[ 4948], 90.00th=[ 5080], 95.00th=[ 5211], 00:25:35.350 | 99.00th=[ 5473], 99.50th=[ 5604], 99.90th=[ 6980], 99.95th=[ 8029], 00:25:35.350 | 99.99th=[ 9372] 00:25:35.350 bw ( KiB/s): min=48256, max=48960, per=100.00%, avg=48464.00, stdev=332.04, samples=4 00:25:35.350 iops : min=12064, max=12240, avg=12116.00, stdev=83.01, samples=4 00:25:35.350 lat (msec) : 4=1.25%, 10=98.75% 00:25:35.350 cpu : usr=71.46%, sys=27.50%, ctx=107, majf=0, minf=2 00:25:35.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:35.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:35.350 issued rwts: total=24377,24293,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:35.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:35.350 00:25:35.350 Run status group 0 (all jobs): 00:25:35.350 READ: bw=47.5MiB/s (49.8MB/s), 47.5MiB/s-47.5MiB/s (49.8MB/s-49.8MB/s), io=95.2MiB (99.8MB), run=2005-2005msec 00:25:35.350 WRITE: bw=47.3MiB/s (49.6MB/s), 47.3MiB/s-47.3MiB/s (49.6MB/s-49.6MB/s), io=94.9MiB (99.5MB), run=2005-2005msec 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:35.350 05:20:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:35.611 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:35.611 fio-3.35 00:25:35.611 Starting 1 thread 00:25:36.544 [2024-12-09 05:20:18.693762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddfd70 is same with the state(6) to be set 00:25:37.915 00:25:37.915 test: (groupid=0, jobs=1): err= 0: pid=588662: Mon Dec 9 05:20:20 2024 00:25:37.915 read: IOPS=11.1k, BW=174MiB/s (182MB/s)(348MiB/2005msec) 00:25:37.915 slat (nsec): min=2350, max=91590, avg=2638.95, stdev=1228.66 00:25:37.915 clat (usec): min=1763, max=49585, avg=6746.45, stdev=3309.68 00:25:37.915 lat (usec): min=1765, max=49588, avg=6749.09, stdev=3309.75 00:25:37.915 clat percentiles (usec): 00:25:37.915 | 1.00th=[ 3523], 5.00th=[ 4228], 10.00th=[ 4686], 20.00th=[ 5276], 00:25:37.915 | 30.00th=[ 5735], 40.00th=[ 6128], 50.00th=[ 6587], 60.00th=[ 6980], 00:25:37.915 | 70.00th=[ 7242], 80.00th=[ 7570], 90.00th=[ 8225], 95.00th=[ 9110], 00:25:37.915 | 99.00th=[11207], 99.50th=[43254], 99.90th=[48497], 99.95th=[49021], 00:25:37.915 | 99.99th=[49546] 00:25:37.915 bw ( KiB/s): min=72512, max=98560, per=50.50%, avg=89856.00, stdev=11994.61, samples=4 00:25:37.915 iops : min= 4532, max= 6160, avg=5616.00, stdev=749.66, samples=4 00:25:37.915 write: IOPS=6863, BW=107MiB/s (112MB/s)(184MiB/1717msec); 0 zone resets 00:25:37.915 slat (usec): min=27, max=360, avg=29.88, stdev= 7.27 00:25:37.915 clat (usec): min=3949, max=15407, avg=8330.53, stdev=1477.43 00:25:37.915 lat 
(usec): min=3978, max=15519, avg=8360.41, stdev=1479.44 00:25:37.915 clat percentiles (usec): 00:25:37.915 | 1.00th=[ 5407], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 7111], 00:25:37.915 | 30.00th=[ 7504], 40.00th=[ 7832], 50.00th=[ 8160], 60.00th=[ 8455], 00:25:37.915 | 70.00th=[ 8848], 80.00th=[ 9372], 90.00th=[10290], 95.00th=[10945], 00:25:37.915 | 99.00th=[12518], 99.50th=[13173], 99.90th=[15008], 99.95th=[15139], 00:25:37.915 | 99.99th=[15270] 00:25:37.915 bw ( KiB/s): min=76352, max=103360, per=85.62%, avg=94024.00, stdev=12698.11, samples=4 00:25:37.915 iops : min= 4772, max= 6460, avg=5876.50, stdev=793.63, samples=4 00:25:37.915 lat (msec) : 2=0.02%, 4=1.98%, 10=91.86%, 20=5.77%, 50=0.37% 00:25:37.915 cpu : usr=84.13%, sys=15.07%, ctx=75, majf=0, minf=2 00:25:37.915 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:37.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:37.915 issued rwts: total=22297,11785,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.915 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:37.915 00:25:37.915 Run status group 0 (all jobs): 00:25:37.915 READ: bw=174MiB/s (182MB/s), 174MiB/s-174MiB/s (182MB/s-182MB/s), io=348MiB (365MB), run=2005-2005msec 00:25:37.915 WRITE: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=184MiB (193MB), run=1717-1717msec 00:25:37.915 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:38.172 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:38.172 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:38.172 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:38.173 05:20:20 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:38.173 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:38.173 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:38.173 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:38.173 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:38.173 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:38.173 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:38.173 rmmod nvme_tcp 00:25:38.173 rmmod nvme_fabrics 00:25:38.173 rmmod nvme_keyring 00:25:38.173 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:38.173 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:38.173 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:38.173 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 587328 ']' 00:25:38.173 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 587328 00:25:38.173 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 587328 ']' 00:25:38.173 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 587328 00:25:38.173 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:25:38.173 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:38.173 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 587328 00:25:38.431 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:38.431 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:25:38.431 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 587328' 00:25:38.431 killing process with pid 587328 00:25:38.431 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 587328 00:25:38.431 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 587328 00:25:38.431 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:38.431 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:38.431 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:38.431 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:38.431 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:38.431 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:38.431 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:38.431 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:38.431 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:38.431 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.431 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:38.431 05:20:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.975 05:20:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:40.975 00:25:40.975 real 0m17.944s 00:25:40.975 user 0m55.212s 00:25:40.975 sys 0m8.007s 00:25:40.975 05:20:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:25:40.975 05:20:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.975 ************************************ 00:25:40.975 END TEST nvmf_fio_host 00:25:40.975 ************************************ 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.976 ************************************ 00:25:40.976 START TEST nvmf_failover 00:25:40.976 ************************************ 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:40.976 * Looking for test storage... 
00:25:40.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:40.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.976 --rc genhtml_branch_coverage=1 00:25:40.976 --rc genhtml_function_coverage=1 00:25:40.976 --rc genhtml_legend=1 00:25:40.976 --rc geninfo_all_blocks=1 00:25:40.976 --rc geninfo_unexecuted_blocks=1 00:25:40.976 00:25:40.976 ' 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:25:40.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.976 --rc genhtml_branch_coverage=1 00:25:40.976 --rc genhtml_function_coverage=1 00:25:40.976 --rc genhtml_legend=1 00:25:40.976 --rc geninfo_all_blocks=1 00:25:40.976 --rc geninfo_unexecuted_blocks=1 00:25:40.976 00:25:40.976 ' 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:40.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.976 --rc genhtml_branch_coverage=1 00:25:40.976 --rc genhtml_function_coverage=1 00:25:40.976 --rc genhtml_legend=1 00:25:40.976 --rc geninfo_all_blocks=1 00:25:40.976 --rc geninfo_unexecuted_blocks=1 00:25:40.976 00:25:40.976 ' 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:40.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.976 --rc genhtml_branch_coverage=1 00:25:40.976 --rc genhtml_function_coverage=1 00:25:40.976 --rc genhtml_legend=1 00:25:40.976 --rc geninfo_all_blocks=1 00:25:40.976 --rc geninfo_unexecuted_blocks=1 00:25:40.976 00:25:40.976 ' 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:25:40.976 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:40.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:40.977 05:20:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:49.293 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.293 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:49.293 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:25:49.293 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:49.293 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:49.293 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:49.293 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:49.293 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:49.293 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:49.293 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:49.293 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:49.293 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:49.293 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:49.293 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:49.293 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.294 05:20:30 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:49.294 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.294 05:20:30 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:49.294 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:49.294 05:20:30 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:49.294 Found net devices under 0000:af:00.0: cvl_0_0 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:49.294 Found net devices under 0000:af:00.1: cvl_0_1 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:49.294 05:20:30 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:49.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:25:49.294 00:25:49.294 --- 10.0.0.2 ping statistics --- 00:25:49.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.294 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:49.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:25:49.294 00:25:49.294 --- 10.0.0.1 ping statistics --- 00:25:49.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.294 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=592658 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 592658 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 592658 ']' 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:49.294 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.295 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:49.295 05:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:49.295 [2024-12-09 05:20:30.620533] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:25:49.295 [2024-12-09 05:20:30.620584] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.295 [2024-12-09 05:20:30.718923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:49.295 [2024-12-09 05:20:30.757603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.295 [2024-12-09 05:20:30.757644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.295 [2024-12-09 05:20:30.757654] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.295 [2024-12-09 05:20:30.757662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:49.295 [2024-12-09 05:20:30.757685] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:49.295 [2024-12-09 05:20:30.759277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.295 [2024-12-09 05:20:30.759396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.295 [2024-12-09 05:20:30.759398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:49.295 05:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:49.295 05:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:49.295 05:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:49.295 05:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:49.295 05:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:49.295 05:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:49.295 05:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:49.295 [2024-12-09 05:20:31.667722] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:49.295 05:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:49.607 Malloc0 00:25:49.607 05:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:49.865 05:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:49.865 05:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.123 [2024-12-09 05:20:32.486321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.123 05:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:50.381 [2024-12-09 05:20:32.690922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:50.382 05:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:50.640 [2024-12-09 05:20:32.891580] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:50.640 05:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=593204 00:25:50.640 05:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:50.640 05:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:50.640 05:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 593204 /var/tmp/bdevperf.sock 00:25:50.640 05:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- 
# '[' -z 593204 ']' 00:25:50.640 05:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:50.640 05:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.640 05:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:50.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:50.640 05:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.640 05:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:51.574 05:20:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:51.574 05:20:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:51.574 05:20:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:51.833 NVMe0n1 00:25:51.833 05:20:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:52.091 00:25:52.091 05:20:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=593470 00:25:52.091 05:20:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:52.091 05:20:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:25:53.467 05:20:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:53.467 [2024-12-09 05:20:35.720088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5e390 is same with the state(6) to be set
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5e390 is same with the state(6) to be set 00:25:53.468 [2024-12-09 05:20:35.720862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5e390 is same with the state(6) to be set 00:25:53.468 [2024-12-09 05:20:35.720870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5e390 is same with the state(6) to be set 00:25:53.468 [2024-12-09 05:20:35.720878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5e390 is same with the state(6) to be set 00:25:53.468 [2024-12-09 05:20:35.720886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5e390 is same with the state(6) to be set 00:25:53.468 [2024-12-09 05:20:35.720894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5e390 is same with the state(6) to be set 00:25:53.468 [2024-12-09 05:20:35.720902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5e390 is same with the state(6) to be set 00:25:53.468 [2024-12-09 05:20:35.720912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5e390 is same with the state(6) to be set 00:25:53.468 [2024-12-09 05:20:35.720921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5e390 is same with the state(6) to be set 00:25:53.468 [2024-12-09 05:20:35.720929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5e390 is same with the state(6) to be set 00:25:53.468 [2024-12-09 05:20:35.720937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5e390 is same with the state(6) to be set 00:25:53.468 [2024-12-09 05:20:35.720945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5e390 is same with the state(6) to be set 00:25:53.468 [2024-12-09 05:20:35.720953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xc5e390 is same with the state(6) to be set 00:25:53.468 [2024-12-09 05:20:35.720961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5e390 is same with the state(6) to be set 00:25:53.468 05:20:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:56.756 05:20:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:56.756 00:25:56.756 05:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:57.014 [2024-12-09 05:20:39.256042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) 
to be set 00:25:57.015 [2024-12-09 05:20:39.256147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 
05:20:39.256260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 [2024-12-09 05:20:39.256334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5eff0 is same with the state(6) to be set 00:25:57.015 05:20:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:00.296 05:20:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:00.296 [2024-12-09 05:20:42.469543] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.296 05:20:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:26:01.230 05:20:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:01.230 [2024-12-09 05:20:43.685396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdab300 is same with the state(6) to be set 00:26:01.489 05:20:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 593470 00:26:08.055 { 00:26:08.055 "results": [ 00:26:08.055 { 00:26:08.055 "job": "NVMe0n1", 00:26:08.055 "core_mask": "0x1", 00:26:08.055 "workload": "verify", 00:26:08.055 "status": "finished", 00:26:08.055 "verify_range": { 00:26:08.055 "start": 0, 00:26:08.055 "length": 16384 00:26:08.055 }, 00:26:08.055 "queue_depth": 128, 00:26:08.055 "io_size": 4096, 00:26:08.055 "runtime": 15.007624, 00:26:08.055 "iops": 11373.619168497291, 00:26:08.055 "mibps": 44.42819987694254, 00:26:08.055 "io_failed": 10549, 00:26:08.055 "io_timeout": 0, 00:26:08.055 "avg_latency_us": 10577.413898565439, 00:26:08.055 "min_latency_us": 606.208, 00:26:08.055 "max_latency_us": 24536.6784 00:26:08.055 } 00:26:08.055 ], 00:26:08.055 "core_count": 1 00:26:08.055 } 00:26:08.055 05:20:49 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 593204 00:26:08.055 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 593204 ']' 00:26:08.055 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 593204 00:26:08.055 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:08.055 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:08.055 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 593204 00:26:08.055 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:08.055 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:08.055 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 593204' 00:26:08.055 killing process with pid 593204 00:26:08.055 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 593204 00:26:08.055 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 593204 00:26:08.055 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:08.055 [2024-12-09 05:20:32.955985] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:26:08.055 [2024-12-09 05:20:32.956040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid593204 ] 00:26:08.055 [2024-12-09 05:20:33.047620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.055 [2024-12-09 05:20:33.087321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.055 Running I/O for 15 seconds... 00:26:08.055 11516.00 IOPS, 44.98 MiB/s [2024-12-09T04:20:50.525Z] [2024-12-09 05:20:35.721965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.055 [2024-12-09 05:20:35.722001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.055 [2024-12-09 05:20:35.722018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.055 [2024-12-09 05:20:35.722028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.055 [2024-12-09 05:20:35.722039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:101240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.055 [2024-12-09 05:20:35.722048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.055 [2024-12-09 05:20:35.722059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:101248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.055 [2024-12-09 05:20:35.722069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:08.055 [2024-12-09 05:20:35.722080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:101256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.055 [2024-12-09 05:20:35.722089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.055 [2024-12-09 05:20:35.722099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.055 [2024-12-09 05:20:35.722108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722186] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:101328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:101336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:101344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:101360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:101368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:101376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:101384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:08.056 [2024-12-09 05:20:35.722417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:101392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:101416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722522] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:101440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:107 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722848] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.056 [2024-12-09 05:20:35.722858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.056 [2024-12-09 05:20:35.722867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.722877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.057 [2024-12-09 05:20:35.722886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.722896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.057 [2024-12-09 05:20:35.722904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.722914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.057 [2024-12-09 05:20:35.722923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.722933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.057 [2024-12-09 05:20:35.722948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.722958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:39 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.057 [2024-12-09 05:20:35.722967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.722977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:101624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.057 [2024-12-09 05:20:35.722986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.722996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.057 [2024-12-09 05:20:35.723005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.057 [2024-12-09 05:20:35.723024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:08.057 [2024-12-09 05:20:35.723072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 
lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 
[2024-12-09 05:20:35.723403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.057 [2024-12-09 05:20:35.723587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 
lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.057 [2024-12-09 05:20:35.723635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.057 [2024-12-09 05:20:35.723645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.723655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.723663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.723673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.723684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.723694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.723703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.723713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.723721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 
[2024-12-09 05:20:35.723731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.723741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.723751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.723760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.723770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.723779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.723789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.723798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.723808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.723819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.723829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.723837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.723847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.723857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.723867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.723877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.723887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:102000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.723895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.723905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.723914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.723926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.723935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.723945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 
lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.723953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.723963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.723972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.723982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:102040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.723991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.724001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:102048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.724009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.724019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.724028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.724038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:102064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.724047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 
[2024-12-09 05:20:35.724057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:102072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.724066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.724076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.724084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.724094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.724103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.724113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:102096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.724123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.724133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.724142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.724152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:102112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.724162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.724172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.724182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.724192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:102128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.058 [2024-12-09 05:20:35.724201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.724226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.058 [2024-12-09 05:20:35.724235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102136 len:8 PRP1 0x0 PRP2 0x0 00:26:08.058 [2024-12-09 05:20:35.724244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.724256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.058 [2024-12-09 05:20:35.724263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.058 [2024-12-09 05:20:35.724270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102144 len:8 PRP1 0x0 PRP2 0x0 00:26:08.058 [2024-12-09 05:20:35.724279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.724288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.058 
[2024-12-09 05:20:35.724295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.058 [2024-12-09 05:20:35.724302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102152 len:8 PRP1 0x0 PRP2 0x0 00:26:08.058 [2024-12-09 05:20:35.724311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.724320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.058 [2024-12-09 05:20:35.724326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.058 [2024-12-09 05:20:35.724334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102160 len:8 PRP1 0x0 PRP2 0x0 00:26:08.058 [2024-12-09 05:20:35.724342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.724351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.058 [2024-12-09 05:20:35.724358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.058 [2024-12-09 05:20:35.724365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102168 len:8 PRP1 0x0 PRP2 0x0 00:26:08.058 [2024-12-09 05:20:35.724374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.724383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.058 [2024-12-09 05:20:35.724389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.058 [2024-12-09 05:20:35.724397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:102176 len:8 PRP1 0x0 PRP2 0x0 00:26:08.058 [2024-12-09 05:20:35.724405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.058 [2024-12-09 05:20:35.724415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.058 [2024-12-09 05:20:35.724422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.059 [2024-12-09 05:20:35.724431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102184 len:8 PRP1 0x0 PRP2 0x0 00:26:08.059 [2024-12-09 05:20:35.724439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:35.724448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.059 [2024-12-09 05:20:35.724455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.059 [2024-12-09 05:20:35.724463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102192 len:8 PRP1 0x0 PRP2 0x0 00:26:08.059 [2024-12-09 05:20:35.724472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:35.724481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.059 [2024-12-09 05:20:35.724487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.059 [2024-12-09 05:20:35.724495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102200 len:8 PRP1 0x0 PRP2 0x0 00:26:08.059 [2024-12-09 05:20:35.724503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:35.724512] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.059 [2024-12-09 05:20:35.724518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.059 [2024-12-09 05:20:35.724526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102208 len:8 PRP1 0x0 PRP2 0x0 00:26:08.059 [2024-12-09 05:20:35.724535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:35.724543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.059 [2024-12-09 05:20:35.724550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.059 [2024-12-09 05:20:35.724557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102216 len:8 PRP1 0x0 PRP2 0x0 00:26:08.059 [2024-12-09 05:20:35.724566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:35.724574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.059 [2024-12-09 05:20:35.724581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.059 [2024-12-09 05:20:35.724589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102224 len:8 PRP1 0x0 PRP2 0x0 00:26:08.059 [2024-12-09 05:20:35.724597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:35.724606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.059 [2024-12-09 05:20:35.724612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.059 [2024-12-09 
05:20:35.724620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102232 len:8 PRP1 0x0 PRP2 0x0 00:26:08.059 [2024-12-09 05:20:35.724628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:35.738217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.059 [2024-12-09 05:20:35.738232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.059 [2024-12-09 05:20:35.738243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102240 len:8 PRP1 0x0 PRP2 0x0 00:26:08.059 [2024-12-09 05:20:35.738255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:35.738313] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:08.059 [2024-12-09 05:20:35.738344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.059 [2024-12-09 05:20:35.738357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:35.738370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.059 [2024-12-09 05:20:35.738382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:35.738394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.059 [2024-12-09 05:20:35.738407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:35.738419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.059 [2024-12-09 05:20:35.738431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:35.738443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:08.059 [2024-12-09 05:20:35.738498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143c8c0 (9): Bad file descriptor 00:26:08.059 [2024-12-09 05:20:35.742118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:08.059 [2024-12-09 05:20:35.813742] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:26:08.059 11012.00 IOPS, 43.02 MiB/s [2024-12-09T04:20:50.529Z] 11177.67 IOPS, 43.66 MiB/s [2024-12-09T04:20:50.529Z] 11317.50 IOPS, 44.21 MiB/s [2024-12-09T04:20:50.529Z] [2024-12-09 05:20:39.257828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:54144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.059 [2024-12-09 05:20:39.257866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:39.257883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:54152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.059 [2024-12-09 05:20:39.257894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:39.257905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:54160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.059 [2024-12-09 05:20:39.257914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:39.257925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:54168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.059 [2024-12-09 05:20:39.257933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:39.257944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:54176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.059 [2024-12-09 05:20:39.257953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:39.257963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:54184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.059 [2024-12-09 05:20:39.257972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:39.257987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:54192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.059 [2024-12-09 05:20:39.257996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:39.258006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:54200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.059 [2024-12-09 05:20:39.258015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:39.258026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:54208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.059 [2024-12-09 05:20:39.258035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:39.258046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:54216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.059 [2024-12-09 05:20:39.258055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:39.258065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:54224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.059 [2024-12-09 05:20:39.258074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:39.258084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:54232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.059 [2024-12-09 05:20:39.258093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:39.258104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.059 [2024-12-09 05:20:39.258112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:39.258123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:54248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.059 [2024-12-09 05:20:39.258132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:39.258142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:54256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.059 [2024-12-09 05:20:39.258151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:39.258162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:54264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.059 [2024-12-09 05:20:39.258170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:39.258181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:54272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:08.059 [2024-12-09 05:20:39.258190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:39.258200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:54280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.059 [2024-12-09 05:20:39.258215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:39.258226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:54288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.059 [2024-12-09 05:20:39.258235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.059 [2024-12-09 05:20:39.258246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.060 [2024-12-09 05:20:39.258255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:54304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.060 [2024-12-09 05:20:39.258274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:54312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.060 [2024-12-09 05:20:39.258293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:54320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.060 [2024-12-09 05:20:39.258312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.060 [2024-12-09 05:20:39.258332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.060 [2024-12-09 05:20:39.258351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:54344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.060 [2024-12-09 05:20:39.258370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:54352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.060 [2024-12-09 05:20:39.258389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:54360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.060 [2024-12-09 05:20:39.258409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:54368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.060 [2024-12-09 05:20:39.258428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:54376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.060 [2024-12-09 05:20:39.258447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:54384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.060 [2024-12-09 05:20:39.258466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 
[2024-12-09 05:20:39.258526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258631] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258854] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.060 [2024-12-09 05:20:39.258930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.060 [2024-12-09 05:20:39.258940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.258949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.258959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 
nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.258969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.258979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.258988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.258998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:54648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 
[2024-12-09 05:20:39.259073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:54664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259184] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:54712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:54720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 
lba:54744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:54776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:54784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 
05:20:39.259407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:54792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:54832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259513] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:54848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:54856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:54872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54880 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:54912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.061 [2024-12-09 05:20:39.259703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.061 [2024-12-09 05:20:39.259713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.062 [2024-12-09 05:20:39.259722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.259749] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.259758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54928 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.259767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.259779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.259789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.259797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54936 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.259805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.259814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.259822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.259829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54944 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.259838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.259848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.259855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.259862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54952 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 
[2024-12-09 05:20:39.259871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.259880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.259887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.259894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54960 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.259903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.259912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.259918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.259926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54968 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.259934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.259943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.259950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.259957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54976 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.259966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.259976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.259983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.259990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54984 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.259999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.260008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.260016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.260023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54992 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.260032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.260041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.260048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.260055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55000 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.260063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.260072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.260079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.260087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55008 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.260095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.260104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.260111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.260118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55016 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.260126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.260135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.260142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.260150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55024 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.260158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.260167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.260174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.260181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55032 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.260189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.260198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.260205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.260216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55040 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.260226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.260235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.260241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.260249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55048 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.260257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.260266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.260273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.260280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55056 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.260289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.260298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.260305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.260312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55064 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.260321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.260330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.260337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.260344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55072 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.260353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.260361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.260368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.260375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55080 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.260384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.260393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.260400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.260407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55088 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.260416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.260425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.260431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.260439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55096 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.260448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.260457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.260463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.062 [2024-12-09 05:20:39.260475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55104 len:8 PRP1 0x0 PRP2 0x0 00:26:08.062 [2024-12-09 05:20:39.260483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.062 [2024-12-09 05:20:39.260492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.062 [2024-12-09 05:20:39.260499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.063 [2024-12-09 05:20:39.260506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55112 len:8 PRP1 0x0 PRP2 0x0 00:26:08.063 [2024-12-09 05:20:39.260515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:39.260523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.063 
[2024-12-09 05:20:39.260530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.063 [2024-12-09 05:20:39.260541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55120 len:8 PRP1 0x0 PRP2 0x0 00:26:08.063 [2024-12-09 05:20:39.260549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:39.260558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.063 [2024-12-09 05:20:39.260565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.063 [2024-12-09 05:20:39.260573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55128 len:8 PRP1 0x0 PRP2 0x0 00:26:08.063 [2024-12-09 05:20:39.260581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:39.260590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.063 [2024-12-09 05:20:39.260598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.063 [2024-12-09 05:20:39.260605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55136 len:8 PRP1 0x0 PRP2 0x0 00:26:08.063 [2024-12-09 05:20:39.260614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:39.260623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.063 [2024-12-09 05:20:39.260630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.063 [2024-12-09 05:20:39.260637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:55144 len:8 PRP1 0x0 PRP2 0x0 00:26:08.063 [2024-12-09 05:20:39.260645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:39.260654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.063 [2024-12-09 05:20:39.260661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.063 [2024-12-09 05:20:39.260668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55152 len:8 PRP1 0x0 PRP2 0x0 00:26:08.063 [2024-12-09 05:20:39.260677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:39.260686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.063 [2024-12-09 05:20:39.260693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.063 [2024-12-09 05:20:39.260700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55160 len:8 PRP1 0x0 PRP2 0x0 00:26:08.063 [2024-12-09 05:20:39.271041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:39.271056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.063 [2024-12-09 05:20:39.271068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.063 [2024-12-09 05:20:39.271077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54392 len:8 PRP1 0x0 PRP2 0x0 00:26:08.063 [2024-12-09 05:20:39.271088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:39.271099] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.063 [2024-12-09 05:20:39.271108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.063 [2024-12-09 05:20:39.271117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54400 len:8 PRP1 0x0 PRP2 0x0 00:26:08.063 [2024-12-09 05:20:39.271127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:39.271177] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:26:08.063 [2024-12-09 05:20:39.271206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.063 [2024-12-09 05:20:39.271232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:39.271245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.063 [2024-12-09 05:20:39.271255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:39.271267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.063 [2024-12-09 05:20:39.271278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:39.271290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.063 [2024-12-09 05:20:39.271300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:39.271313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:08.063 [2024-12-09 05:20:39.271351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143c8c0 (9): Bad file descriptor 00:26:08.063 [2024-12-09 05:20:39.274708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:08.063 [2024-12-09 05:20:39.305873] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:26:08.063 11226.00 IOPS, 43.85 MiB/s [2024-12-09T04:20:50.533Z] 11291.67 IOPS, 44.11 MiB/s [2024-12-09T04:20:50.533Z] 11337.14 IOPS, 44.29 MiB/s [2024-12-09T04:20:50.533Z] 11369.25 IOPS, 44.41 MiB/s [2024-12-09T04:20:50.533Z] 11391.67 IOPS, 44.50 MiB/s [2024-12-09T04:20:50.533Z] [2024-12-09 05:20:43.687144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.063 [2024-12-09 05:20:43.687185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:43.687202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.063 [2024-12-09 05:20:43.687217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:43.687228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.063 [2024-12-09 05:20:43.687237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:43.687253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.063 [2024-12-09 05:20:43.687262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:43.687273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.063 [2024-12-09 05:20:43.687282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:43.687292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.063 [2024-12-09 05:20:43.687301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:43.687311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.063 [2024-12-09 05:20:43.687320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:43.687330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.063 [2024-12-09 05:20:43.687339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:43.687350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:08.063 [2024-12-09 05:20:43.687358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:43.687368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.063 [2024-12-09 05:20:43.687377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:43.687388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.063 [2024-12-09 05:20:43.687397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:43.687407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.063 [2024-12-09 05:20:43.687416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:43.687426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.063 [2024-12-09 05:20:43.687434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:43.687445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.063 [2024-12-09 05:20:43.687453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:43.687464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.063 [2024-12-09 05:20:43.687473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:43.687483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.063 [2024-12-09 05:20:43.687493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:43.687504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.063 [2024-12-09 05:20:43.687513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.063 [2024-12-09 05:20:43.687523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.063 [2024-12-09 05:20:43.687533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 
[2024-12-09 05:20:43.687684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687790] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.687987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.687996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.688006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.688015] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.688025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.688034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.688044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.688052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.688062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.688071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.688081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.688090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.688100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.688109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.688119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 
nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.688128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.688138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.688147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.688157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.688166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.688176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.688185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.688195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.688203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.064 [2024-12-09 05:20:43.688218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.064 [2024-12-09 05:20:43.688228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 
[2024-12-09 05:20:43.688239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 
lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 
05:20:43.688568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.065 [2024-12-09 05:20:43.688988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.065 [2024-12-09 05:20:43.688997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.066 [2024-12-09 05:20:43.689007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.066 [2024-12-09 05:20:43.689016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.066 [2024-12-09 05:20:43.689038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.066 [2024-12-09 05:20:43.689046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78928 len:8 PRP1 0x0 PRP2 0x0 00:26:08.066 [2024-12-09 05:20:43.689055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.066 [2024-12-09 05:20:43.689068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... identical "aborting queued i/o" / "Command completed manually" / "ABORTED - SQ DELETION" sequences repeated for WRITE lba:78936 through lba:79184 trimmed ...]
00:26:08.067 [2024-12-09 05:20:43.702313] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:08.067 [2024-12-09 05:20:43.702340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.067 [2024-12-09 05:20:43.702351] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.067 [2024-12-09 05:20:43.702362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.067 [2024-12-09 05:20:43.702371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.067 [2024-12-09 05:20:43.702380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.067 [2024-12-09 05:20:43.702389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.067 [2024-12-09 05:20:43.702399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.067 [2024-12-09 05:20:43.702408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.067 [2024-12-09 05:20:43.702417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:08.067 [2024-12-09 05:20:43.702451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143c8c0 (9): Bad file descriptor 00:26:08.067 [2024-12-09 05:20:43.705479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:08.067 [2024-12-09 05:20:43.815662] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:26:08.067 11262.70 IOPS, 43.99 MiB/s [2024-12-09T04:20:50.537Z] 11300.45 IOPS, 44.14 MiB/s [2024-12-09T04:20:50.537Z] 11322.83 IOPS, 44.23 MiB/s [2024-12-09T04:20:50.537Z] 11345.38 IOPS, 44.32 MiB/s [2024-12-09T04:20:50.537Z] 11355.57 IOPS, 44.36 MiB/s [2024-12-09T04:20:50.537Z] 11375.40 IOPS, 44.44 MiB/s 00:26:08.067 Latency(us) 00:26:08.067 [2024-12-09T04:20:50.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.067 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:08.067 Verification LBA range: start 0x0 length 0x4000 00:26:08.067 NVMe0n1 : 15.01 11373.62 44.43 702.91 0.00 10577.41 606.21 24536.68 00:26:08.067 [2024-12-09T04:20:50.537Z] =================================================================================================================== 00:26:08.067 [2024-12-09T04:20:50.537Z] Total : 11373.62 44.43 702.91 0.00 10577.41 606.21 24536.68 00:26:08.067 Received shutdown signal, test time was about 15.000000 seconds 00:26:08.067 00:26:08.067 Latency(us) 00:26:08.067 [2024-12-09T04:20:50.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.067 [2024-12-09T04:20:50.537Z] =================================================================================================================== 00:26:08.067 [2024-12-09T04:20:50.537Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:08.067 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:08.067 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:26:08.067 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:26:08.067 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=596014 00:26:08.067 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:08.067 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 596014 /var/tmp/bdevperf.sock 00:26:08.067 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 596014 ']' 00:26:08.067 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:08.067 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:08.067 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:08.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:08.067 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:08.067 05:20:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:08.634 05:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:08.634 05:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:08.634 05:20:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:08.634 [2024-12-09 05:20:51.023549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:08.634 05:20:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:08.893 [2024-12-09 05:20:51.220042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:08.893 
05:20:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:09.152 NVMe0n1 00:26:09.152 05:20:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:09.411 00:26:09.670 05:20:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:09.670 00:26:09.929 05:20:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:09.929 05:20:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:09.929 05:20:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:10.188 05:20:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:13.476 05:20:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:13.476 05:20:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:26:13.476 05:20:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=596963 00:26:13.476 05:20:55 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:13.476 05:20:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 596963 00:26:14.413 { 00:26:14.413 "results": [ 00:26:14.413 { 00:26:14.413 "job": "NVMe0n1", 00:26:14.413 "core_mask": "0x1", 00:26:14.413 "workload": "verify", 00:26:14.413 "status": "finished", 00:26:14.413 "verify_range": { 00:26:14.413 "start": 0, 00:26:14.413 "length": 16384 00:26:14.413 }, 00:26:14.413 "queue_depth": 128, 00:26:14.413 "io_size": 4096, 00:26:14.413 "runtime": 1.011786, 00:26:14.413 "iops": 11501.443981237138, 00:26:14.413 "mibps": 44.92751555170757, 00:26:14.413 "io_failed": 0, 00:26:14.413 "io_timeout": 0, 00:26:14.413 "avg_latency_us": 11087.16275878663, 00:26:14.413 "min_latency_us": 2228.224, 00:26:14.413 "max_latency_us": 13107.2 00:26:14.413 } 00:26:14.413 ], 00:26:14.413 "core_count": 1 00:26:14.413 } 00:26:14.413 05:20:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:14.413 [2024-12-09 05:20:50.016863] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:26:14.413 [2024-12-09 05:20:50.016918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid596014 ] 00:26:14.413 [2024-12-09 05:20:50.113860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.413 [2024-12-09 05:20:50.153646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.413 [2024-12-09 05:20:52.497980] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:14.413 [2024-12-09 05:20:52.498024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.413 [2024-12-09 05:20:52.498038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.413 [2024-12-09 05:20:52.498048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.413 [2024-12-09 05:20:52.498058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.413 [2024-12-09 05:20:52.498067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.413 [2024-12-09 05:20:52.498076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.413 [2024-12-09 05:20:52.498086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.413 [2024-12-09 05:20:52.498095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.413 [2024-12-09 05:20:52.498104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:26:14.413 [2024-12-09 05:20:52.498131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:26:14.413 [2024-12-09 05:20:52.498147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209c8c0 (9): Bad file descriptor 00:26:14.413 [2024-12-09 05:20:52.560665] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:26:14.413 Running I/O for 1 seconds... 00:26:14.413 11509.00 IOPS, 44.96 MiB/s 00:26:14.413 Latency(us) 00:26:14.413 [2024-12-09T04:20:56.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.413 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:14.413 Verification LBA range: start 0x0 length 0x4000 00:26:14.413 NVMe0n1 : 1.01 11501.44 44.93 0.00 0.00 11087.16 2228.22 13107.20 00:26:14.413 [2024-12-09T04:20:56.883Z] =================================================================================================================== 00:26:14.413 [2024-12-09T04:20:56.883Z] Total : 11501.44 44.93 0.00 0.00 11087.16 2228.22 13107.20 00:26:14.413 05:20:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:14.413 05:20:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:14.672 05:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:14.930 05:20:57 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:14.930 05:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:15.190 05:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:15.190 05:20:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:18.480 05:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:18.480 05:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:18.480 05:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 596014 00:26:18.480 05:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 596014 ']' 00:26:18.480 05:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 596014 00:26:18.480 05:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:18.480 05:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:18.480 05:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 596014 00:26:18.480 05:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:18.480 05:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:18.740 05:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 596014' 00:26:18.740 killing process 
with pid 596014 00:26:18.740 05:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 596014 00:26:18.740 05:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 596014 00:26:18.740 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:18.740 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:19.000 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:19.000 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:19.000 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:19.000 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:19.000 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:26:19.000 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:19.000 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:26:19.000 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:19.000 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:19.000 rmmod nvme_tcp 00:26:19.000 rmmod nvme_fabrics 00:26:19.000 rmmod nvme_keyring 00:26:19.000 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:19.000 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:26:19.000 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:26:19.000 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 592658 ']' 00:26:19.000 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@518 -- # killprocess 592658 00:26:19.000 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 592658 ']' 00:26:19.000 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 592658 00:26:19.000 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:19.000 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:19.000 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 592658 00:26:19.283 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:19.283 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:19.283 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 592658' 00:26:19.283 killing process with pid 592658 00:26:19.283 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 592658 00:26:19.283 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 592658 00:26:19.283 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:19.283 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:19.283 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:19.283 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:26:19.283 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:26:19.283 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:19.283 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:26:19.283 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:19.283 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:19.283 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.283 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:19.283 05:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.823 05:21:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:21.823 00:26:21.823 real 0m40.733s 00:26:21.823 user 2m5.039s 00:26:21.823 sys 0m10.268s 00:26:21.823 05:21:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:21.823 05:21:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:21.823 ************************************ 00:26:21.823 END TEST nvmf_failover 00:26:21.823 ************************************ 00:26:21.823 05:21:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:21.823 05:21:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:21.823 05:21:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:21.823 05:21:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.823 ************************************ 00:26:21.823 START TEST nvmf_host_discovery 00:26:21.823 ************************************ 00:26:21.823 05:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:21.823 * Looking for test storage... 
00:26:21.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:21.823 05:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:21.823 05:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:26:21.823 05:21:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:21.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.823 --rc genhtml_branch_coverage=1 00:26:21.823 --rc genhtml_function_coverage=1 00:26:21.823 --rc 
genhtml_legend=1 00:26:21.823 --rc geninfo_all_blocks=1 00:26:21.823 --rc geninfo_unexecuted_blocks=1 00:26:21.823 00:26:21.823 ' 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:21.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.823 --rc genhtml_branch_coverage=1 00:26:21.823 --rc genhtml_function_coverage=1 00:26:21.823 --rc genhtml_legend=1 00:26:21.823 --rc geninfo_all_blocks=1 00:26:21.823 --rc geninfo_unexecuted_blocks=1 00:26:21.823 00:26:21.823 ' 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:21.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.823 --rc genhtml_branch_coverage=1 00:26:21.823 --rc genhtml_function_coverage=1 00:26:21.823 --rc genhtml_legend=1 00:26:21.823 --rc geninfo_all_blocks=1 00:26:21.823 --rc geninfo_unexecuted_blocks=1 00:26:21.823 00:26:21.823 ' 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:21.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.823 --rc genhtml_branch_coverage=1 00:26:21.823 --rc genhtml_function_coverage=1 00:26:21.823 --rc genhtml_legend=1 00:26:21.823 --rc geninfo_all_blocks=1 00:26:21.823 --rc geninfo_unexecuted_blocks=1 00:26:21.823 00:26:21.823 ' 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.823 05:21:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.823 05:21:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.823 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.824 05:21:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:21.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:26:21.824 05:21:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:26:29.945 
05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:29.945 05:21:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:29.945 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:29.945 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:29.945 Found net devices under 0000:af:00.0: cvl_0_0 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:29.945 Found net devices under 0000:af:00.1: cvl_0_1 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:29.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:29.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:26:29.945 00:26:29.945 --- 10.0.0.2 ping statistics --- 00:26:29.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.945 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:26:29.945 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:29.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:29.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:26:29.945 00:26:29.946 --- 10.0.0.1 ping statistics --- 00:26:29.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.946 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:26:29.946 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:29.946 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:26:29.946 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:29.946 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:29.946 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:29.946 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:29.946 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:29.946 
05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:29.946 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:29.946 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:29.946 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:29.946 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:29.946 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.946 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=602279 00:26:29.946 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:29.946 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 602279 00:26:29.946 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 602279 ']' 00:26:29.946 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.946 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.946 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:29.946 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.946 05:21:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.946 [2024-12-09 05:21:11.446855] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:26:29.946 [2024-12-09 05:21:11.446900] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.946 [2024-12-09 05:21:11.540983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.946 [2024-12-09 05:21:11.580485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.946 [2024-12-09 05:21:11.580527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.946 [2024-12-09 05:21:11.580536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:29.946 [2024-12-09 05:21:11.580544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:29.946 [2024-12-09 05:21:11.580551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:29.946 [2024-12-09 05:21:11.581152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.946 [2024-12-09 05:21:12.337792] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.946 [2024-12-09 05:21:12.350003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:29.946 05:21:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.946 null0 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.946 null1 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=602408 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 602408 /tmp/host.sock 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 602408 ']' 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:29.946 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.946 05:21:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.206 [2024-12-09 05:21:12.430362] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:26:30.206 [2024-12-09 05:21:12.430405] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602408 ] 00:26:30.206 [2024-12-09 05:21:12.522307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.206 [2024-12-09 05:21:12.560984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:31.157 05:21:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:31.157 05:21:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.157 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:31.158 
05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.158 [2024-12-09 05:21:13.593201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.158 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.417 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:26:31.418 05:21:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:31.986 [2024-12-09 05:21:14.328404] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:31.986 [2024-12-09 05:21:14.328424] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:31.986 [2024-12-09 05:21:14.328440] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:31.986 [2024-12-09 05:21:14.416694] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:32.245 [2024-12-09 05:21:14.476390] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:32.245 [2024-12-09 05:21:14.477235] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x2285350:1 started. 00:26:32.245 [2024-12-09 05:21:14.478714] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:32.245 [2024-12-09 05:21:14.478732] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:32.245 [2024-12-09 05:21:14.486143] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2285350 was disconnected and freed. delete nvme_qpair. 00:26:32.505 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:32.505 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:32.505 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:32.505 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:32.505 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:32.505 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.505 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:32.505 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.505 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:32.505 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.505 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:32.506 05:21:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.506 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.766 05:21:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:32.766 
05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.766 [2024-12-09 05:21:15.183956] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2285720:1 started. 00:26:32.766 [2024-12-09 05:21:15.187899] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2285720 was disconnected and freed. delete nvme_qpair. 
00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.766 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.027 [2024-12-09 05:21:15.277705] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:33.027 [2024-12-09 05:21:15.278264] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:33.027 [2024-12-09 05:21:15.278284] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:33.027 05:21:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:33.027 [2024-12-09 05:21:15.364839] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:33.027 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:33.028 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:33.028 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:26:33.028 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:33.028 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:33.028 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:33.028 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:33.028 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.028 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.028 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:33.028 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:33.028 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:33.028 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.028 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:33.028 05:21:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:33.288 [2024-12-09 05:21:15.503850] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:26:33.288 [2024-12-09 05:21:15.503883] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:33.288 [2024-12-09 05:21:15.503893] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:26:33.288 [2024-12-09 05:21:15.503899] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:34.229 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:34.229 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:34.229 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.230 [2024-12-09 05:21:16.541664] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:34.230 [2024-12-09 05:21:16.541687] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:34.230 [2024-12-09 05:21:16.548476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.230 [2024-12-09 05:21:16.548498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.230 [2024-12-09 05:21:16.548509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.230 [2024-12-09 05:21:16.548518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.230 [2024-12-09 05:21:16.548528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.230 [2024-12-09 05:21:16.548536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.230 [2024-12-09 05:21:16.548546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.230 [2024-12-09 05:21:16.548554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.230 [2024-12-09 05:21:16.548563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2255910 is same with the state(6) to be set 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:34.230 05:21:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:34.230 [2024-12-09 05:21:16.558488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2255910 (9): Bad file descriptor 00:26:34.230 [2024-12-09 05:21:16.568523] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.230 [2024-12-09 05:21:16.568536] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:34.230 [2024-12-09 05:21:16.568543] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.230 [2024-12-09 05:21:16.568549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.230 [2024-12-09 05:21:16.568571] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:34.230 [2024-12-09 05:21:16.568816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.230 [2024-12-09 05:21:16.568832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2255910 with addr=10.0.0.2, port=4420 00:26:34.230 [2024-12-09 05:21:16.568842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2255910 is same with the state(6) to be set 00:26:34.230 [2024-12-09 05:21:16.568855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2255910 (9): Bad file descriptor 00:26:34.230 [2024-12-09 05:21:16.568880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.230 [2024-12-09 05:21:16.568890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.230 [2024-12-09 05:21:16.568900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.230 [2024-12-09 05:21:16.568908] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.230 [2024-12-09 05:21:16.568915] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:34.230 [2024-12-09 05:21:16.568921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:34.230 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.230 [2024-12-09 05:21:16.578602] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.230 [2024-12-09 05:21:16.578614] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:26:34.230 [2024-12-09 05:21:16.578620] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.230 [2024-12-09 05:21:16.578626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.230 [2024-12-09 05:21:16.578640] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:34.230 [2024-12-09 05:21:16.578861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.230 [2024-12-09 05:21:16.578875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2255910 with addr=10.0.0.2, port=4420 00:26:34.230 [2024-12-09 05:21:16.578884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2255910 is same with the state(6) to be set 00:26:34.230 [2024-12-09 05:21:16.578896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2255910 (9): Bad file descriptor 00:26:34.230 [2024-12-09 05:21:16.578915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.230 [2024-12-09 05:21:16.578924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.230 [2024-12-09 05:21:16.578933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.230 [2024-12-09 05:21:16.578940] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.230 [2024-12-09 05:21:16.578946] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:34.230 [2024-12-09 05:21:16.578952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:34.230 [2024-12-09 05:21:16.588673] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.230 [2024-12-09 05:21:16.588686] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:34.230 [2024-12-09 05:21:16.588693] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.230 [2024-12-09 05:21:16.588701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.230 [2024-12-09 05:21:16.588717] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:34.230 [2024-12-09 05:21:16.588986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.231 [2024-12-09 05:21:16.589001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2255910 with addr=10.0.0.2, port=4420 00:26:34.231 [2024-12-09 05:21:16.589010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2255910 is same with the state(6) to be set 00:26:34.231 [2024-12-09 05:21:16.589023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2255910 (9): Bad file descriptor 00:26:34.231 [2024-12-09 05:21:16.589048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.231 [2024-12-09 05:21:16.589058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.231 [2024-12-09 05:21:16.589067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.231 [2024-12-09 05:21:16.589074] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:34.231 [2024-12-09 05:21:16.589080] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:34.231 [2024-12-09 05:21:16.589086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:34.231 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.231 [2024-12-09 05:21:16.598748] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.231 [2024-12-09 05:21:16.598762] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:34.231 [2024-12-09 05:21:16.598768] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.231 [2024-12-09 05:21:16.598773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.231 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:34.231 [2024-12-09 05:21:16.598788] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:34.231 [2024-12-09 05:21:16.598992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.231 [2024-12-09 05:21:16.599005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2255910 with addr=10.0.0.2, port=4420 00:26:34.231 [2024-12-09 05:21:16.599014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2255910 is same with the state(6) to be set 00:26:34.231 [2024-12-09 05:21:16.599026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2255910 (9): Bad file descriptor 00:26:34.231 [2024-12-09 05:21:16.599038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.231 [2024-12-09 05:21:16.599046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.231 [2024-12-09 05:21:16.599055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.231 [2024-12-09 05:21:16.599062] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.231 [2024-12-09 05:21:16.599068] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:34.231 [2024-12-09 05:21:16.599073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:34.231 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:34.231 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:34.231 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:34.231 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:34.231 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:34.231 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:34.231 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.231 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.231 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.231 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:34.231 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:34.231 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:34.231 [2024-12-09 05:21:16.608818] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.231 [2024-12-09 05:21:16.608832] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:34.231 [2024-12-09 05:21:16.608839] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:26:34.231 [2024-12-09 05:21:16.608846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.231 [2024-12-09 05:21:16.608860] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:34.231 [2024-12-09 05:21:16.609062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.231 [2024-12-09 05:21:16.609075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2255910 with addr=10.0.0.2, port=4420 00:26:34.231 [2024-12-09 05:21:16.609084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2255910 is same with the state(6) to be set 00:26:34.231 [2024-12-09 05:21:16.609095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2255910 (9): Bad file descriptor 00:26:34.231 [2024-12-09 05:21:16.609699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.231 [2024-12-09 05:21:16.609711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.231 [2024-12-09 05:21:16.609720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.231 [2024-12-09 05:21:16.609728] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.231 [2024-12-09 05:21:16.609734] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:34.231 [2024-12-09 05:21:16.609739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:34.231 [2024-12-09 05:21:16.618892] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:26:34.231 [2024-12-09 05:21:16.618904] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:34.231 [2024-12-09 05:21:16.618910] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.231 [2024-12-09 05:21:16.618916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.231 [2024-12-09 05:21:16.618931] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:34.231 [2024-12-09 05:21:16.619162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.231 [2024-12-09 05:21:16.619176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2255910 with addr=10.0.0.2, port=4420 00:26:34.231 [2024-12-09 05:21:16.619186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2255910 is same with the state(6) to be set 00:26:34.231 [2024-12-09 05:21:16.619198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2255910 (9): Bad file descriptor 00:26:34.231 [2024-12-09 05:21:16.619215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.231 [2024-12-09 05:21:16.619223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.231 [2024-12-09 05:21:16.619232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.231 [2024-12-09 05:21:16.619239] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.231 [2024-12-09 05:21:16.619246] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:26:34.231 [2024-12-09 05:21:16.619251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:34.231 [2024-12-09 05:21:16.628963] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.231 [2024-12-09 05:21:16.628975] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:34.231 [2024-12-09 05:21:16.628981] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.231 [2024-12-09 05:21:16.628987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.231 [2024-12-09 05:21:16.629002] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:34.231 [2024-12-09 05:21:16.629249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.231 [2024-12-09 05:21:16.629264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2255910 with addr=10.0.0.2, port=4420 00:26:34.231 [2024-12-09 05:21:16.629273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2255910 is same with the state(6) to be set 00:26:34.231 [2024-12-09 05:21:16.629286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2255910 (9): Bad file descriptor 00:26:34.231 [2024-12-09 05:21:16.629305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.231 [2024-12-09 05:21:16.629314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.231 [2024-12-09 05:21:16.629322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:34.231 [2024-12-09 05:21:16.629330] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.231 [2024-12-09 05:21:16.629336] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:34.231 [2024-12-09 05:21:16.629342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:34.231 [2024-12-09 05:21:16.639034] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.231 [2024-12-09 05:21:16.639045] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:34.231 [2024-12-09 05:21:16.639051] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.231 [2024-12-09 05:21:16.639057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.231 [2024-12-09 05:21:16.639075] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:34.231 [2024-12-09 05:21:16.639321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.231 [2024-12-09 05:21:16.639335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2255910 with addr=10.0.0.2, port=4420 00:26:34.231 [2024-12-09 05:21:16.639344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2255910 is same with the state(6) to be set 00:26:34.232 [2024-12-09 05:21:16.639358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2255910 (9): Bad file descriptor 00:26:34.232 [2024-12-09 05:21:16.639370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.232 [2024-12-09 05:21:16.639378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.232 [2024-12-09 05:21:16.639387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.232 [2024-12-09 05:21:16.639394] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.232 [2024-12-09 05:21:16.639400] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:34.232 [2024-12-09 05:21:16.639406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:34.232 [2024-12-09 05:21:16.649107] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.232 [2024-12-09 05:21:16.649119] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:26:34.232 [2024-12-09 05:21:16.649125] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.232 [2024-12-09 05:21:16.649131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.232 [2024-12-09 05:21:16.649146] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:34.232 [2024-12-09 05:21:16.649389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.232 [2024-12-09 05:21:16.649403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2255910 with addr=10.0.0.2, port=4420 00:26:34.232 [2024-12-09 05:21:16.649412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2255910 is same with the state(6) to be set 00:26:34.232 [2024-12-09 05:21:16.649424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2255910 (9): Bad file descriptor 00:26:34.232 [2024-12-09 05:21:16.649442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.232 [2024-12-09 05:21:16.649451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.232 [2024-12-09 05:21:16.649460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.232 [2024-12-09 05:21:16.649467] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.232 [2024-12-09 05:21:16.649473] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:34.232 [2024-12-09 05:21:16.649479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:34.232 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.232 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:34.232 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:34.232 [2024-12-09 05:21:16.659177] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.232 [2024-12-09 05:21:16.659193] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:34.232 [2024-12-09 05:21:16.659199] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.232 [2024-12-09 05:21:16.659205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.232 [2024-12-09 05:21:16.659224] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:34.232 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:34.232 [2024-12-09 05:21:16.659410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.232 [2024-12-09 05:21:16.659425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2255910 with addr=10.0.0.2, port=4420 00:26:34.232 [2024-12-09 05:21:16.659434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2255910 is same with the state(6) to be set 00:26:34.232 [2024-12-09 05:21:16.659446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2255910 (9): Bad file descriptor 00:26:34.232 [2024-12-09 05:21:16.659458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.232 [2024-12-09 05:21:16.659466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.232 [2024-12-09 05:21:16.659475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.232 [2024-12-09 05:21:16.659482] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.232 [2024-12-09 05:21:16.659488] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:34.232 [2024-12-09 05:21:16.659494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:34.232 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:34.232 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:34.232 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:34.232 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:34.232 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:34.232 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:34.232 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:34.232 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.232 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:34.232 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.232 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:34.232 [2024-12-09 05:21:16.667897] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:34.232 [2024-12-09 05:21:16.667915] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:34.232 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:26:34.492 05:21:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:34.492 05:21:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:34.492 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.493 
05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:34.493 05:21:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.493 05:21:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:35.873 [2024-12-09 05:21:17.977769] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:35.873 [2024-12-09 05:21:17.977795] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:35.873 [2024-12-09 05:21:17.977811] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:35.873 [2024-12-09 05:21:18.104181] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:36.133 [2024-12-09 05:21:18.374494] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:36.133 [2024-12-09 05:21:18.375090] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x2254d40:1 started. 00:26:36.133 [2024-12-09 05:21:18.376709] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:36.133 [2024-12-09 05:21:18.376734] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:36.133 [2024-12-09 05:21:18.386194] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x2254d40 was disconnected and freed. delete nvme_qpair. 00:26:36.133 request: 00:26:36.133 { 00:26:36.133 "name": "nvme", 00:26:36.133 "trtype": "tcp", 00:26:36.133 "traddr": "10.0.0.2", 00:26:36.133 "adrfam": "ipv4", 00:26:36.133 "trsvcid": "8009", 00:26:36.133 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:36.133 "wait_for_attach": true, 00:26:36.133 "method": "bdev_nvme_start_discovery", 00:26:36.133 "req_id": 1 00:26:36.133 } 00:26:36.133 Got JSON-RPC error response 00:26:36.133 response: 00:26:36.133 { 00:26:36.133 "code": -17, 00:26:36.133 "message": "File exists" 00:26:36.133 } 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:36.133 
05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:36.133 05:21:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:36.133 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:36.134 request: 00:26:36.134 { 00:26:36.134 "name": "nvme_second", 00:26:36.134 "trtype": "tcp", 00:26:36.134 "traddr": "10.0.0.2", 00:26:36.134 "adrfam": "ipv4", 00:26:36.134 "trsvcid": "8009", 00:26:36.134 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:36.134 "wait_for_attach": true, 00:26:36.134 "method": "bdev_nvme_start_discovery", 00:26:36.134 "req_id": 1 00:26:36.134 } 00:26:36.134 Got JSON-RPC error response 00:26:36.134 response: 00:26:36.134 { 00:26:36.134 "code": -17, 00:26:36.134 "message": "File exists" 00:26:36.134 } 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:36.134 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:26:36.393 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.393 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:36.393 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:36.393 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:36.393 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:36.393 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:36.393 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:36.393 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:36.393 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:36.393 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:36.393 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.393 05:21:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.331 [2024-12-09 05:21:19.633267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.331 [2024-12-09 05:21:19.633294] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22911b0 with addr=10.0.0.2, port=8010 00:26:37.331 [2024-12-09 05:21:19.633309] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:37.331 [2024-12-09 05:21:19.633317] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:37.331 [2024-12-09 05:21:19.633325] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:38.271 [2024-12-09 05:21:20.635799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.271 [2024-12-09 05:21:20.635831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22911b0 with addr=10.0.0.2, port=8010 00:26:38.271 [2024-12-09 05:21:20.635850] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:38.271 [2024-12-09 05:21:20.635859] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:38.271 [2024-12-09 05:21:20.635867] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:39.208 [2024-12-09 05:21:21.637928] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:39.208 request: 00:26:39.208 { 00:26:39.208 "name": "nvme_second", 00:26:39.208 "trtype": "tcp", 00:26:39.208 "traddr": "10.0.0.2", 00:26:39.208 "adrfam": "ipv4", 00:26:39.208 "trsvcid": "8010", 00:26:39.208 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:39.208 "wait_for_attach": false, 00:26:39.208 "attach_timeout_ms": 3000, 00:26:39.208 "method": "bdev_nvme_start_discovery", 00:26:39.208 "req_id": 1 00:26:39.208 } 00:26:39.208 Got JSON-RPC error response 00:26:39.208 response: 00:26:39.208 { 00:26:39.208 "code": -110, 00:26:39.208 "message": "Connection timed out" 00:26:39.208 } 00:26:39.208 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # 
[[ 1 == 0 ]] 00:26:39.208 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:39.208 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:39.208 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:39.208 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:39.208 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:39.208 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:39.208 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:39.208 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.208 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:39.208 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.208 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:39.208 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 602408 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:39.467 05:21:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:39.467 rmmod nvme_tcp 00:26:39.467 rmmod nvme_fabrics 00:26:39.467 rmmod nvme_keyring 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 602279 ']' 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 602279 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 602279 ']' 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 602279 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 602279 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 602279' 00:26:39.467 
killing process with pid 602279 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 602279 00:26:39.467 05:21:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 602279 00:26:39.726 05:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:39.726 05:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:39.726 05:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:39.726 05:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:39.726 05:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:39.726 05:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:39.726 05:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:39.726 05:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:39.726 05:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:39.726 05:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.726 05:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.726 05:21:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:42.268 00:26:42.268 real 0m20.233s 00:26:42.268 user 0m23.414s 00:26:42.268 sys 0m7.611s 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:26:42.268 ************************************ 00:26:42.268 END TEST nvmf_host_discovery 00:26:42.268 ************************************ 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.268 ************************************ 00:26:42.268 START TEST nvmf_host_multipath_status 00:26:42.268 ************************************ 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:42.268 * Looking for test storage... 
00:26:42.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:42.268 05:21:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:42.268 05:21:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:42.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.268 --rc genhtml_branch_coverage=1 00:26:42.268 --rc genhtml_function_coverage=1 00:26:42.268 --rc genhtml_legend=1 00:26:42.268 --rc geninfo_all_blocks=1 00:26:42.268 --rc geninfo_unexecuted_blocks=1 00:26:42.268 00:26:42.268 ' 00:26:42.268 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:42.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.269 --rc genhtml_branch_coverage=1 00:26:42.269 --rc genhtml_function_coverage=1 00:26:42.269 --rc genhtml_legend=1 00:26:42.269 --rc geninfo_all_blocks=1 00:26:42.269 --rc geninfo_unexecuted_blocks=1 00:26:42.269 00:26:42.269 ' 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:42.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.269 --rc genhtml_branch_coverage=1 00:26:42.269 --rc genhtml_function_coverage=1 00:26:42.269 --rc genhtml_legend=1 00:26:42.269 --rc geninfo_all_blocks=1 00:26:42.269 --rc geninfo_unexecuted_blocks=1 00:26:42.269 00:26:42.269 ' 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:42.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.269 --rc genhtml_branch_coverage=1 00:26:42.269 --rc genhtml_function_coverage=1 00:26:42.269 --rc genhtml_legend=1 00:26:42.269 --rc geninfo_all_blocks=1 00:26:42.269 --rc geninfo_unexecuted_blocks=1 00:26:42.269 00:26:42.269 ' 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:42.269 
05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:42.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:42.269 05:21:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:42.269 05:21:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:50.428 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:50.428 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:50.428 Found net devices under 0000:af:00.0: cvl_0_0 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.428 05:21:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:50.428 Found net devices under 0000:af:00.1: cvl_0_1 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:50.428 05:21:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:50.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:26:50.428 00:26:50.428 --- 10.0.0.2 ping statistics --- 00:26:50.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.428 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:50.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:50.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:26:50.428 00:26:50.428 --- 10.0.0.1 ping statistics --- 00:26:50.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.428 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:50.428 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:50.429 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.429 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:50.429 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:50.429 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:50.429 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:50.429 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:50.429 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:50.429 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=607865 00:26:50.429 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 607865 00:26:50.429 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:50.429 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 607865 ']' 00:26:50.429 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.429 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:50.429 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.429 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:50.429 05:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:50.429 [2024-12-09 05:21:31.784918] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:26:50.429 [2024-12-09 05:21:31.784974] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.429 [2024-12-09 05:21:31.881590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:50.429 [2024-12-09 05:21:31.921867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.429 [2024-12-09 05:21:31.921906] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
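The nvmf_tcp_init sequence traced above (namespace creation, moving one E810 port into it, address assignment, the iptables accept rule) can be sketched as a dry-run script. Interface names and addresses are taken from this log; the `run` wrapper is a hypothetical helper that only echoes each command, so the sketch can be inspected without root or E810 hardware (swap `echo` for `sudo` to execute for real):

```shell
#!/usr/bin/env bash
# Sketch of the target-namespace bring-up performed by nvmf_tcp_init in nvmf/common.sh.
# Assumption: cvl_0_0 becomes the target-side port, cvl_0_1 stays in the root
# namespace as the initiator side, matching the log above.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }   # hypothetical dry-run helper; replace with sudo to apply

run ip -4 addr flush cvl_0_0                       # start from clean addresses
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"                             # target namespace
run ip link set cvl_0_0 netns "$NS"                # target port moves into it
run ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator IP, root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                             # sanity check, as in the log
```

With this split in place, the target (`nvmf_tgt`) is launched under `ip netns exec cvl_0_0_ns_spdk`, which is why every subsequent target-side command in the log carries that prefix.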
00:26:50.429 [2024-12-09 05:21:31.921915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.429 [2024-12-09 05:21:31.921923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.429 [2024-12-09 05:21:31.921930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:50.429 [2024-12-09 05:21:31.923325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.429 [2024-12-09 05:21:31.923325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.429 05:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:50.429 05:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:50.429 05:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:50.429 05:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:50.429 05:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:50.429 05:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.429 05:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=607865 00:26:50.429 05:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:50.429 [2024-12-09 05:21:32.827041] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.429 05:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:26:50.687 Malloc0 00:26:50.687 05:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:50.945 05:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:51.204 05:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:51.204 [2024-12-09 05:21:33.651163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:51.463 05:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:51.463 [2024-12-09 05:21:33.847695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:51.463 05:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=608328 00:26:51.464 05:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:51.464 05:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:51.464 05:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 608328 /var/tmp/bdevperf.sock 00:26:51.464 05:21:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 608328 ']' 00:26:51.464 05:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:51.464 05:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:51.464 05:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:51.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:51.464 05:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:51.464 05:21:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:52.403 05:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:52.403 05:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:52.403 05:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:52.663 05:21:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:52.924 Nvme0n1 00:26:52.924 05:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:53.497 Nvme0n1 00:26:53.497 05:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:53.497 05:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:55.409 05:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:55.409 05:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:55.670 05:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:55.930 05:21:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:56.870 05:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:56.870 05:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:56.870 05:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.870 05:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:57.131 05:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.131 05:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:57.131 05:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.131 05:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:57.390 05:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:57.390 05:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:57.390 05:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.390 05:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:57.390 05:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.390 05:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:57.390 05:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.390 05:21:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:57.650 05:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.650 05:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:57.650 05:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.650 05:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:57.910 05:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.910 05:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:57.910 05:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.910 05:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:58.170 05:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.170 05:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:58.170 05:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:58.429 05:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:58.429 05:21:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:59.419 05:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:59.419 05:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:59.419 05:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.419 05:21:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:59.678 05:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:59.678 05:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:59.678 05:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.678 05:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:59.936 05:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.936 05:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:59.936 05:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.936 05:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:00.195 05:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.195 05:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:00.195 05:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.195 05:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:00.455 05:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.455 05:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:00.455 05:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:00.455 05:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.455 05:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.455 05:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:00.455 05:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.455 05:21:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:00.714 05:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.714 05:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:00.714 05:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:00.974 05:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:01.234 05:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:02.173 05:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:02.173 05:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:02.173 05:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.173 05:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:02.433 05:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.433 05:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:02.433 05:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.433 05:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:02.693 05:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:02.693 05:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:02.693 05:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.693 05:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:02.951 05:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.951 05:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:02.952 05:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.952 05:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:02.952 05:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.952 05:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:02.952 05:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:02.952 05:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.211 05:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.211 05:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:03.211 05:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:03.211 05:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.470 05:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.470 05:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:03.470 05:21:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:03.729 05:21:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:03.988 05:21:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:04.926 05:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:04.926 05:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:04.926 05:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.926 05:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:05.186 05:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.186 05:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:05.186 05:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.186 05:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:05.186 05:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:05.186 05:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:05.186 05:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.444 05:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:05.444 05:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.444 05:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:05.445 05:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.445 05:21:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:05.703 05:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.703 05:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:05.703 05:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.703 05:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:05.962 05:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.962 05:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:05.962 05:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.962 05:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:06.221 05:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:06.221 05:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:06.221 05:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:06.480 05:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:06.480 05:21:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:07.859 05:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:07.859 05:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:07.859 05:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.859 05:21:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:07.859 05:21:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:07.859 05:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:07.859 05:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.859 05:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:07.859 05:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:07.859 05:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:07.859 05:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.859 05:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:08.118 05:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.118 05:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:08.118 05:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.118 05:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:08.377 
05:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.377 05:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:08.377 05:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.377 05:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:08.637 05:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:08.637 05:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:08.637 05:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.637 05:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:08.637 05:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:08.637 05:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:08.637 05:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:08.896 05:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:09.155 05:21:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:10.094 05:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:10.094 05:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:10.094 05:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.094 05:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:10.354 05:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:10.354 05:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:10.354 05:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.354 05:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:10.614 05:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.614 05:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:10.614 05:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.614 05:21:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:10.614 05:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.614 05:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:10.614 05:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:10.614 05:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.874 05:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.874 05:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:10.874 05:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.874 05:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:11.134 05:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:11.134 05:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:11.134 05:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.134 05:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:11.394 05:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.394 05:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:11.653 05:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:11.653 05:21:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:11.653 05:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:11.913 05:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:12.852 05:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:12.852 05:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:12.852 05:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:27:12.852 05:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:13.111 05:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.111 05:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:13.111 05:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.111 05:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:13.370 05:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.370 05:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:13.370 05:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.370 05:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:13.630 05:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.630 05:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:13.630 05:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:27:13.630 05:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:13.630 05:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.630 05:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:13.889 05:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.889 05:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:13.889 05:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.889 05:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:13.889 05:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.889 05:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:14.149 05:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:14.149 05:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:14.149 05:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:14.408 05:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:14.666 05:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:15.603 05:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:15.603 05:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:15.603 05:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.604 05:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:15.864 05:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:15.864 05:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:15.864 05:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.864 05:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:16.123 05:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.123 05:21:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:16.123 05:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.123 05:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:16.123 05:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.123 05:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:16.123 05:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:16.123 05:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.383 05:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.383 05:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:16.383 05:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.383 05:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:16.643 05:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.643 
05:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:16.643 05:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.643 05:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:16.903 05:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.903 05:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:16.903 05:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:17.162 05:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:17.421 05:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:18.359 05:22:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:18.359 05:22:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:18.359 05:22:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.359 05:22:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:18.630 05:22:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.630 05:22:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:18.630 05:22:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.630 05:22:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:18.630 05:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.630 05:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:18.630 05:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.630 05:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:18.941 05:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.941 05:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:18.941 05:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:18.941 05:22:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:19.221 05:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:19.221 05:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:19.221 05:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:19.221 05:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:19.564 05:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:19.564 05:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:19.564 05:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:19.564 05:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:19.564 05:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:19.564 05:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:19.564 05:22:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:19.870 05:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:20.136 05:22:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:21.074 05:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:21.074 05:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:21.074 05:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.074 05:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:21.333 05:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.333 05:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:21.334 05:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.334 05:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:21.334 05:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:21.334 
05:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:21.334 05:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.334 05:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:21.593 05:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.593 05:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:21.593 05:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.593 05:22:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:21.852 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.852 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:21.852 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.852 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:22.111 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
00:27:22.111 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:22.112 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.112 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:22.112 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:22.112 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 608328 00:27:22.112 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 608328 ']' 00:27:22.112 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 608328 00:27:22.112 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:22.112 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:22.112 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 608328 00:27:22.374 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:22.374 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:22.374 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 608328' 00:27:22.374 killing process with pid 608328 00:27:22.374 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 608328 
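The trace above repeats one pattern many times: after each `set_ANA_state` change, `check_status` calls a `port_status` helper that fetches `bdev_nvme_get_io_paths` over the bdevperf RPC socket and uses `jq` to compare one path attribute (`current`, `connected`, `accessible`) for a given listener port against an expected value. A minimal, self-contained sketch of that helper is below; the inline JSON is a hypothetical stand-in for the real output of `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths`, and the field values are illustrative, not taken from this run.

```shell
#!/usr/bin/env bash
# Sketch of the port_status check pattern seen in the trace above.
# Hypothetical sample standing in for `bdev_nvme_get_io_paths` RPC output:
sample='{"poll_groups":[{"io_paths":[
  {"transport":{"trsvcid":"4420"},"current":true,"connected":true,"accessible":true},
  {"transport":{"trsvcid":"4421"},"current":false,"connected":true,"accessible":false}]}]}'

# port_status PORT ATTR EXPECTED -> exit 0 iff the path on PORT has ATTR == EXPECTED
port_status() {
  local port=$1 attr=$2 expected=$3 got
  # Select the io_path whose transport.trsvcid matches, then read the attribute.
  got=$(jq -r --arg p "$port" --arg a "$attr" \
    '.poll_groups[].io_paths[] | select(.transport.trsvcid==$p) | .[$a]' <<<"$sample")
  [[ $got == "$expected" ]]
}

port_status 4420 current true && echo "4420 current ok"
port_status 4421 accessible false && echo "4421 accessible ok"
```

In the real test the JSON comes from the RPC socket each time, which is why `sleep 1` follows each `set_ANA_state`: the host needs a moment to process the ANA change notification before the reported path states settle.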
00:27:22.374 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 608328
00:27:22.374 {
00:27:22.374 "results": [
00:27:22.374 {
00:27:22.374 "job": "Nvme0n1",
00:27:22.374 "core_mask": "0x4",
00:27:22.374 "workload": "verify",
00:27:22.374 "status": "terminated",
00:27:22.374 "verify_range": {
00:27:22.374 "start": 0,
00:27:22.374 "length": 16384
00:27:22.374 },
00:27:22.374 "queue_depth": 128,
00:27:22.374 "io_size": 4096,
00:27:22.374 "runtime": 28.692837,
00:27:22.374 "iops": 10823.223928675996,
00:27:22.374 "mibps": 42.27821847139061,
00:27:22.374 "io_failed": 0,
00:27:22.374 "io_timeout": 0,
00:27:22.374 "avg_latency_us": 11806.609428087677,
00:27:22.374 "min_latency_us": 353.8944,
00:27:22.374 "max_latency_us": 3019898.88
00:27:22.374 }
00:27:22.374 ],
00:27:22.374 "core_count": 1
00:27:22.374 }
00:27:22.374 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 608328
00:27:22.374 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-12-09 05:21:33.929155] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
[2024-12-09 05:21:33.929219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid608328 ]
[2024-12-09 05:21:34.023559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-09 05:21:34.063006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
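The bdevperf results block above is internally consistent and easy to sanity-check: with 4 KiB I/Os, throughput in MiB/s is IOPS divided by 256 (4096 bytes is 1/256 of a MiB). A small sketch using the figures copied from the "results" JSON:

```python
# Figures copied from the bdevperf "results" JSON in the log above.
iops = 10823.223928675996
io_size = 4096            # bytes per I/O ("io_size")
runtime = 28.692837       # seconds ("runtime")

# MiB/s = IOPS * bytes-per-I/O / 2**20; with 4 KiB I/Os that is IOPS / 256.
mibps = iops * io_size / 2**20
print(round(mibps, 2))    # matches "mibps": 42.278... in the results

# Rough total I/Os completed over the run.
total_ios = iops * runtime
print(round(total_ios))
```

The same figures also roughly agree with Little's law for the reported queue depth of 128 and ~11.8 ms average latency.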
00:27:22.375 11624.00 IOPS, 45.41 MiB/s [2024-12-09T04:22:04.845Z] 11747.50 IOPS, 45.89 MiB/s [2024-12-09T04:22:04.845Z] 11716.00 IOPS, 45.77 MiB/s [2024-12-09T04:22:04.845Z] 11780.75 IOPS, 46.02 MiB/s [2024-12-09T04:22:04.845Z] 11762.40 IOPS, 45.95 MiB/s [2024-12-09T04:22:04.845Z] 11735.50 IOPS, 45.84 MiB/s [2024-12-09T04:22:04.845Z] 11724.00 IOPS, 45.80 MiB/s [2024-12-09T04:22:04.845Z] 11723.38 IOPS, 45.79 MiB/s [2024-12-09T04:22:04.845Z] 11721.22 IOPS, 45.79 MiB/s [2024-12-09T04:22:04.845Z] 11718.10 IOPS, 45.77 MiB/s [2024-12-09T04:22:04.845Z] 11722.09 IOPS, 45.79 MiB/s [2024-12-09T04:22:04.845Z] 11722.75 IOPS, 45.79 MiB/s [2024-12-09T04:22:04.845Z] [2024-12-09 05:21:48.663172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.663215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.663269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.663281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.663296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.663306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.663321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.663330] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.663345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.663354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.663368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.663377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.663391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.663401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.663415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.663425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.663439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.663448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.664590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:120 nsid:1 lba:17128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.664618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.664637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.664646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.664662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.664672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.664689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.664698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.664714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.664723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.664740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.664750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.664766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.664775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.664791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.664800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.664816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.664825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.664841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.664851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.664867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.664876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.664893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:17216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.664901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.664918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.664928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.664946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.664956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.664972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.664981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.664998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.665007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.665024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.665034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.665050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.665059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.665075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.665084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.665101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.665110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.665126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.665135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.665151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.665161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.665177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17304 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:22.375 [2024-12-09 05:21:48.665186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:22.375 [2024-12-09 05:21:48.665202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.665217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.665242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.665269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.665295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.665320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 
m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.665346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.665371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.665397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.665427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.665453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:22.376 [2024-12-09 05:21:48.665478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.665504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.665530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.665555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.665581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.665607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:22.376 
[2024-12-09 05:21:48.665623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.665632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-12-09 05:21:48.665834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.665862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.665890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.665917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 
05:21:48.665944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.665972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.665990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.665999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.666020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.666029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.666048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.666057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.666076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.666087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.666106] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.666115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.666133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.666142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.666161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.666170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.666189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.666198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.666223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.666232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.666250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.666259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.666278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.666287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.666305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.666314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.666333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.666341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.666360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.666369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:22.376 [2024-12-09 05:21:48.666387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.376 [2024-12-09 05:21:48.666397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:22.377 [2024-12-09 05:21:48.666415] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.377 [2024-12-09 05:21:48.666425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:22.377 [2024-12-09 05:21:48.666444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.377 [2024-12-09 05:21:48.666453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:22.377 [2024-12-09 05:21:48.666472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.377 [2024-12-09 05:21:48.666481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:22.377 [2024-12-09 05:21:48.666501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.377 [2024-12-09 05:21:48.666510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.377 [2024-12-09 05:21:48.666528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.377 [2024-12-09 05:21:48.666537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:22.377 [2024-12-09 05:21:48.666555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.377 [2024-12-09 05:21:48.666564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:22.377 [2024-12-09 05:21:48.666583 - 05:21:48.667078] [... similar nvme_qpair.c command/completion notice pairs omitted: WRITE sqid:1 lba:17656-17720 and READ sqid:1 lba:17000-17048, len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1, sqhd:0064-0073 ...] 11506.15 IOPS, 44.95 MiB/s [2024-12-09T04:22:04.847Z] 
10684.29 IOPS, 41.74 MiB/s [2024-12-09T04:22:04.847Z] 9972.00 IOPS, 38.95 MiB/s [2024-12-09T04:22:04.847Z] 9522.62 IOPS, 37.20 MiB/s [2024-12-09T04:22:04.847Z] 9658.18 IOPS, 37.73 MiB/s [2024-12-09T04:22:04.847Z] 9779.06 IOPS, 38.20 MiB/s [2024-12-09T04:22:04.847Z] 9960.68 IOPS, 38.91 MiB/s [2024-12-09T04:22:04.847Z] 10155.65 IOPS, 39.67 MiB/s [2024-12-09T04:22:04.847Z] 10299.81 IOPS, 40.23 MiB/s [2024-12-09T04:22:04.847Z] 10356.64 IOPS, 40.46 MiB/s [2024-12-09T04:22:04.847Z] 10406.57 IOPS, 40.65 MiB/s [2024-12-09T04:22:04.847Z] 10475.46 IOPS, 40.92 MiB/s [2024-12-09T04:22:04.847Z] 10592.52 IOPS, 41.38 MiB/s [2024-12-09T04:22:04.847Z] 10701.42 IOPS, 41.80 MiB/s [2024-12-09T04:22:04.847Z] [2024-12-09 05:22:02.308821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:37192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.377 [2024-12-09 05:22:02.308860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:22.377 [2024-12-09 05:22:02.308912 - 05:22:02.311260] [... similar nvme_qpair.c command/completion notice pairs omitted: WRITE sqid:1 lba:37208-37608 and READ sqid:1 lba:36592-37184, len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 ...] 10769.19 IOPS, 42.07 MiB/s [2024-12-09T04:22:04.849Z] 10801.93 IOPS, 42.20 MiB/s [2024-12-09T04:22:04.849Z] Received shutdown signal, test time was about 28.693481 seconds 00:27:22.379 00:27:22.379 Latency(us) 00:27:22.379 [2024-12-09T04:22:04.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:22.379 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 
00:27:22.379 Verification LBA range: start 0x0 length 0x4000 00:27:22.379 Nvme0n1 : 28.69 10823.22 42.28 0.00 0.00 11806.61 353.89 3019898.88 00:27:22.379 [2024-12-09T04:22:04.849Z] =================================================================================================================== 00:27:22.379 [2024-12-09T04:22:04.849Z] Total : 10823.22 42.28 0.00 0.00 11806.61 353.89 3019898.88 00:27:22.379 05:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:22.639 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:27:22.639 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:22.639 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:27:22.639 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:22.639 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:27:22.639 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:22.639 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:27:22.639 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:22.639 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:22.639 rmmod nvme_tcp 00:27:22.639 rmmod nvme_fabrics 00:27:22.639 rmmod nvme_keyring 00:27:22.639 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:22.898 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@128 -- # set -e 00:27:22.898 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:27:22.898 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 607865 ']' 00:27:22.898 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 607865 00:27:22.898 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 607865 ']' 00:27:22.898 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 607865 00:27:22.898 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:22.898 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:22.898 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 607865 00:27:22.898 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:22.898 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:22.898 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 607865' 00:27:22.898 killing process with pid 607865 00:27:22.898 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 607865 00:27:22.898 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 607865 00:27:23.165 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:23.165 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:23.165 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:27:23.165 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:27:23.165 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:27:23.165 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:23.165 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:27:23.165 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:23.165 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:23.165 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.165 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:23.166 05:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.071 05:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:25.071 00:27:25.071 real 0m43.281s 00:27:25.071 user 1m51.222s 00:27:25.071 sys 0m15.076s 00:27:25.071 05:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:25.071 05:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:25.071 ************************************ 00:27:25.071 END TEST nvmf_host_multipath_status 00:27:25.071 ************************************ 00:27:25.071 05:22:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:25.071 05:22:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # 
'[' 3 -le 1 ']' 00:27:25.071 05:22:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:25.071 05:22:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.331 ************************************ 00:27:25.331 START TEST nvmf_discovery_remove_ifc 00:27:25.331 ************************************ 00:27:25.331 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:25.331 * Looking for test storage... 00:27:25.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:25.331 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:25.331 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:27:25.331 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:25.331 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:25.331 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:25.331 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:25.331 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:25.331 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:27:25.331 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:27:25.331 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:27:25.331 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:27:25.331 05:22:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:25.332 05:22:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:25.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.332 --rc genhtml_branch_coverage=1 00:27:25.332 --rc genhtml_function_coverage=1 00:27:25.332 --rc genhtml_legend=1 00:27:25.332 --rc geninfo_all_blocks=1 00:27:25.332 --rc geninfo_unexecuted_blocks=1 00:27:25.332 00:27:25.332 ' 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:25.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.332 --rc genhtml_branch_coverage=1 00:27:25.332 --rc genhtml_function_coverage=1 00:27:25.332 --rc genhtml_legend=1 00:27:25.332 --rc geninfo_all_blocks=1 00:27:25.332 --rc geninfo_unexecuted_blocks=1 00:27:25.332 00:27:25.332 ' 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:25.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.332 --rc genhtml_branch_coverage=1 00:27:25.332 --rc genhtml_function_coverage=1 00:27:25.332 --rc genhtml_legend=1 00:27:25.332 --rc geninfo_all_blocks=1 00:27:25.332 --rc geninfo_unexecuted_blocks=1 00:27:25.332 00:27:25.332 ' 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:25.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.332 
--rc genhtml_branch_coverage=1 00:27:25.332 --rc genhtml_function_coverage=1 00:27:25.332 --rc genhtml_legend=1 00:27:25.332 --rc geninfo_all_blocks=1 00:27:25.332 --rc geninfo_unexecuted_blocks=1 00:27:25.332 00:27:25.332 ' 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:27:25.332 
05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:25.332 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:25.592 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:25.592 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:25.592 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:25.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:25.592 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:25.592 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:25.592 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:25.592 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:25.592 
05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:25.592 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:25.592 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:25.592 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:25.592 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:25.592 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:25.592 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:25.592 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:25.593 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:25.593 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:25.593 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:25.593 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.593 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.593 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.593 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:25.593 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:25.593 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:27:25.593 05:22:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:33.715 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:33.715 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:27:33.715 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:33.715 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:33.715 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:33.715 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:33.715 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:33.715 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:27:33.715 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:33.715 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:27:33.715 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:27:33.715 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:27:33.715 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:27:33.715 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:27:33.715 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:27:33.715 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:33.715 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:33.715 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:33.715 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:33.715 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:33.715 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:33.716 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:33.716 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:33.716 Found net devices under 0000:af:00.0: cvl_0_0 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:33.716 05:22:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:33.716 Found net devices under 0000:af:00.1: cvl_0_1 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:33.716 05:22:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:33.716 05:22:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:33.716 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:33.716 05:22:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:33.716 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:33.716 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:33.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:33.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:27:33.716 00:27:33.716 --- 10.0.0.2 ping statistics --- 00:27:33.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.716 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:27:33.716 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:33.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:33.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:27:33.716 00:27:33.716 --- 10.0.0.1 ping statistics --- 00:27:33.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.716 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:27:33.716 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.716 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:27:33.716 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:33.716 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.716 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:33.716 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:33.716 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.716 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:33.716 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:33.716 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:33.716 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:33.716 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:33.716 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:33.716 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=617390 00:27:33.716 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:33.716 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 617390 00:27:33.716 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 617390 ']' 00:27:33.717 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.717 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:33.717 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.717 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:33.717 05:22:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:33.717 [2024-12-09 05:22:15.207925] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:27:33.717 [2024-12-09 05:22:15.207977] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.717 [2024-12-09 05:22:15.306187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.717 [2024-12-09 05:22:15.345760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.717 [2024-12-09 05:22:15.345797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:33.717 [2024-12-09 05:22:15.345807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:33.717 [2024-12-09 05:22:15.345815] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:33.717 [2024-12-09 05:22:15.345822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:33.717 [2024-12-09 05:22:15.346409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.717 05:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:33.717 05:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:33.717 05:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:33.717 05:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:33.717 05:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:33.717 05:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.717 05:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:33.717 05:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.717 05:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:33.717 [2024-12-09 05:22:16.100511] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.717 [2024-12-09 05:22:16.108716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:33.717 null0 00:27:33.717 [2024-12-09 05:22:16.140685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:27:33.717 05:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.717 05:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=617569 00:27:33.717 05:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:33.717 05:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 617569 /tmp/host.sock 00:27:33.717 05:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 617569 ']' 00:27:33.717 05:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:33.717 05:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:33.717 05:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:33.717 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:33.717 05:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:33.717 05:22:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:33.976 [2024-12-09 05:22:16.213929] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:27:33.976 [2024-12-09 05:22:16.213974] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617569 ] 00:27:33.976 [2024-12-09 05:22:16.305468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.976 [2024-12-09 05:22:16.349483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.915 05:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:34.915 05:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:34.915 05:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:34.915 05:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:34.915 05:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.915 05:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:34.915 05:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.915 05:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:34.915 05:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.915 05:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:34.915 05:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.915 05:22:17 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:34.915 05:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.915 05:22:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:35.852 [2024-12-09 05:22:18.148708] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:35.852 [2024-12-09 05:22:18.148729] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:35.852 [2024-12-09 05:22:18.148743] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:35.852 [2024-12-09 05:22:18.275121] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:36.111 [2024-12-09 05:22:18.450193] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:36.111 [2024-12-09 05:22:18.450886] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2154390:1 started. 
00:27:36.111 [2024-12-09 05:22:18.452324] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:36.111 [2024-12-09 05:22:18.452368] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:36.111 [2024-12-09 05:22:18.452389] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:36.111 [2024-12-09 05:22:18.452407] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:36.111 [2024-12-09 05:22:18.452427] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:36.111 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.112 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:36.112 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:36.112 [2024-12-09 05:22:18.457974] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2154390 was disconnected and freed. delete nvme_qpair. 
00:27:36.112 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:36.112 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:36.112 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.112 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:36.112 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:36.112 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:36.112 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.112 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:36.112 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:36.112 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:36.370 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:36.370 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:36.370 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:36.370 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:36.370 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.370 05:22:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:36.370 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:36.370 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:36.370 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.370 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:36.370 05:22:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:37.304 05:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:37.304 05:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:37.304 05:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:37.304 05:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.304 05:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:37.304 05:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:37.304 05:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:37.304 05:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.304 05:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:37.304 05:22:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:38.688 05:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:27:38.688 05:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:38.688 05:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:38.688 05:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.688 05:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:38.688 05:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.688 05:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:38.688 05:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.688 05:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:38.688 05:22:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:39.623 05:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:39.623 05:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:39.623 05:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:39.623 05:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.623 05:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:39.623 05:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:39.623 05:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:39.623 05:22:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.623 05:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:39.623 05:22:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:40.558 05:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:40.558 05:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:40.558 05:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:40.558 05:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.558 05:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:40.558 05:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:40.558 05:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:40.558 05:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.558 05:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:40.558 05:22:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:41.497 05:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:41.497 05:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:41.497 05:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:41.498 05:22:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.498 05:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:41.498 05:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.498 05:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:41.498 [2024-12-09 05:22:23.893879] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:41.498 [2024-12-09 05:22:23.893928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.498 [2024-12-09 05:22:23.893942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.498 [2024-12-09 05:22:23.893954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.498 [2024-12-09 05:22:23.893963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.498 [2024-12-09 05:22:23.893972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.498 [2024-12-09 05:22:23.893982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.498 [2024-12-09 05:22:23.893992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.498 [2024-12-09 05:22:23.894001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.498 [2024-12-09 05:22:23.894011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.498 [2024-12-09 05:22:23.894020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.498 [2024-12-09 05:22:23.894030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2130b80 is same with the state(6) to be set 00:27:41.498 [2024-12-09 05:22:23.903901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2130b80 (9): Bad file descriptor 00:27:41.498 [2024-12-09 05:22:23.913935] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:41.498 [2024-12-09 05:22:23.913948] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:41.498 [2024-12-09 05:22:23.913954] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:41.498 [2024-12-09 05:22:23.913960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:41.498 [2024-12-09 05:22:23.913988] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:41.498 05:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.498 05:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:41.498 05:22:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:42.877 05:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:42.877 [2024-12-09 05:22:24.944264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:42.877 [2024-12-09 05:22:24.944358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2130b80 with addr=10.0.0.2, port=4420 00:27:42.877 [2024-12-09 05:22:24.944403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2130b80 is same with the state(6) to be set 00:27:42.877 [2024-12-09 05:22:24.944474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2130b80 (9): Bad file descriptor 00:27:42.877 05:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:42.877 [2024-12-09 05:22:24.944605] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:42.877 [2024-12-09 05:22:24.944672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:42.877 [2024-12-09 05:22:24.944704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:42.877 [2024-12-09 05:22:24.944736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:27:42.877 [2024-12-09 05:22:24.944762] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:42.877 [2024-12-09 05:22:24.944786] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:42.877 [2024-12-09 05:22:24.944807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:42.877 [2024-12-09 05:22:24.944839] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:42.877 [2024-12-09 05:22:24.944861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:42.877 05:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:42.877 05:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.877 05:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:42.877 05:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:42.877 05:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:42.877 05:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.877 05:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:42.877 05:22:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:43.815 [2024-12-09 05:22:25.947362] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:43.815 [2024-12-09 05:22:25.947385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:27:43.815 [2024-12-09 05:22:25.947399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:43.815 [2024-12-09 05:22:25.947408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:43.815 [2024-12-09 05:22:25.947418] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:43.815 [2024-12-09 05:22:25.947444] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:43.815 [2024-12-09 05:22:25.947451] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:43.815 [2024-12-09 05:22:25.947457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:43.815 [2024-12-09 05:22:25.947482] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:43.815 [2024-12-09 05:22:25.947506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.815 [2024-12-09 05:22:25.947520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.815 [2024-12-09 05:22:25.947533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.815 [2024-12-09 05:22:25.947542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.815 [2024-12-09 05:22:25.947556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:43.815 [2024-12-09 05:22:25.947565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.815 [2024-12-09 05:22:25.947574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.815 [2024-12-09 05:22:25.947583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.815 [2024-12-09 05:22:25.947593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.815 [2024-12-09 05:22:25.947603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.815 [2024-12-09 05:22:25.947612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:27:43.815 [2024-12-09 05:22:25.947642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211fe60 (9): Bad file descriptor 00:27:43.815 [2024-12-09 05:22:25.948637] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:43.815 [2024-12-09 05:22:25.948649] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:43.815 05:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:43.815 05:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:43.815 05:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:43.815 05:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:43.815 05:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:43.815 05:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:43.815 05:22:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:43.815 05:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.815 05:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:43.815 05:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:43.815 05:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:43.815 05:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:43.815 05:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:43.815 05:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:43.815 05:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:43.815 05:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:43.815 05:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.815 05:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:43.815 05:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:43.815 05:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:27:43.815 05:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:43.815 05:22:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:44.752 05:22:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:44.752 05:22:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:44.752 05:22:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:44.752 05:22:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.752 05:22:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:44.752 05:22:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:44.752 05:22:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:44.752 05:22:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.010 05:22:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:45.010 05:22:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:45.578 [2024-12-09 05:22:27.963738] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:45.578 [2024-12-09 05:22:27.963756] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:45.578 [2024-12-09 05:22:27.963768] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:45.836 [2024-12-09 05:22:28.090164] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:45.836 05:22:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:45.836 05:22:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:45.836 05:22:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.836 05:22:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:45.836 05:22:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:45.836 05:22:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:45.836 05:22:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:45.836 05:22:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.836 [2024-12-09 05:22:28.266181] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:45.836 [2024-12-09 05:22:28.266825] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x215db00:1 started. 
00:27:45.836 [2024-12-09 05:22:28.267883] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:45.836 [2024-12-09 05:22:28.267919] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:45.836 [2024-12-09 05:22:28.267938] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:45.836 [2024-12-09 05:22:28.267954] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:45.836 [2024-12-09 05:22:28.267963] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:45.836 05:22:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:45.836 05:22:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:46.095 [2024-12-09 05:22:28.313784] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x215db00 was disconnected and freed. delete nvme_qpair. 
00:27:47.069 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:27:47.069 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:47.069 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:27:47.069 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.069 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:27:47.069 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:47.069 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:27:47.069 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.069 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]]
00:27:47.069 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT
00:27:47.069 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 617569
00:27:47.069 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 617569 ']'
00:27:47.069 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 617569
00:27:47.069 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname
00:27:47.069 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:47.069 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 617569
00:27:47.069 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:47.069 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:47.069 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 617569'
killing process with pid 617569
05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 617569
00:27:47.069 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 617569
00:27:47.345 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini
00:27:47.345 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:47.345 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync
00:27:47.345 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:47.345 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e
00:27:47.345 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:47.345 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:47.345 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e
00:27:47.345 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0
00:27:47.345 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 617390 ']'
00:27:47.345 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 617390
00:27:47.345 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 617390 ']'
00:27:47.345 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 617390
00:27:47.345 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname
00:27:47.345 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:47.345 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 617390
00:27:47.345 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:47.345 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:47.345 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 617390'
killing process with pid 617390
05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 617390
00:27:47.345 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 617390
00:27:47.604 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:47.604 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:47.604 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:47.604 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr
00:27:47.604 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save
00:27:47.604 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:47.604 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore
00:27:47.604 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:47.604 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:47.604 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:47.604 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:47.604 05:22:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:50.139
00:27:50.139 real 0m24.458s
00:27:50.139 user 0m29.139s
00:27:50.139 sys 0m7.874s
00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:50.139 ************************************
00:27:50.139 END TEST nvmf_discovery_remove_ifc
00:27:50.139 ************************************
00:27:50.139 05:22:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:27:50.139 05:22:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:50.139 05:22:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:50.139 05:22:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:50.139 ************************************
00:27:50.139 START TEST nvmf_identify_kernel_target
00:27:50.139 ************************************ 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:50.139 * Looking for test storage... 00:27:50.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@341 -- # ver2_l=1 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:50.139 
05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:50.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.139 --rc genhtml_branch_coverage=1 00:27:50.139 --rc genhtml_function_coverage=1 00:27:50.139 --rc genhtml_legend=1 00:27:50.139 --rc geninfo_all_blocks=1 00:27:50.139 --rc geninfo_unexecuted_blocks=1 00:27:50.139 00:27:50.139 ' 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:50.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.139 --rc genhtml_branch_coverage=1 00:27:50.139 --rc genhtml_function_coverage=1 00:27:50.139 --rc genhtml_legend=1 00:27:50.139 --rc geninfo_all_blocks=1 00:27:50.139 --rc geninfo_unexecuted_blocks=1 00:27:50.139 00:27:50.139 ' 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:50.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.139 --rc genhtml_branch_coverage=1 00:27:50.139 --rc genhtml_function_coverage=1 00:27:50.139 --rc genhtml_legend=1 00:27:50.139 --rc geninfo_all_blocks=1 00:27:50.139 --rc geninfo_unexecuted_blocks=1 00:27:50.139 00:27:50.139 ' 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:50.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.139 --rc genhtml_branch_coverage=1 00:27:50.139 --rc genhtml_function_coverage=1 00:27:50.139 --rc genhtml_legend=1 00:27:50.139 --rc geninfo_all_blocks=1 00:27:50.139 --rc geninfo_unexecuted_blocks=1 00:27:50.139 
00:27:50.139 ' 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.139 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.140 05:22:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:50.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable
00:27:50.140 05:22:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=()
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=()
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=()
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=()
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=()
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
Found 0000:af:00.0 (0x8086 - 0x159b)
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:58.274 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
Found 0000:af:00.1 (0x8086 - 0x159b)
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
Found net devices under 0000:af:00.0: cvl_0_0
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
Found net devices under 0000:af:00.1: cvl_0_1
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:58.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:58.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:27:58.275 00:27:58.275 --- 10.0.0.2 ping statistics --- 00:27:58.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.275 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:58.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:27:58.275 00:27:58.275 --- 10.0.0.1 ping statistics --- 00:27:58.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.275 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:58.275 
05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:58.275 05:22:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:00.816 Waiting for block devices as requested 00:28:00.816 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:00.816 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:00.816 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:00.816 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:01.075 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:01.075 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:01.075 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:01.334 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:01.334 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:01.334 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:01.592 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:01.592 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:01.592 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:01.851 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:01.851 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 
00:28:01.851 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:02.110 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:02.370 No valid GPT data, bailing 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:28:02.370 00:28:02.370 Discovery Log Number of Records 2, Generation counter 2 00:28:02.370 =====Discovery Log Entry 0====== 00:28:02.370 trtype: tcp 00:28:02.370 adrfam: ipv4 00:28:02.370 subtype: current discovery subsystem 
00:28:02.370 treq: not specified, sq flow control disable supported 00:28:02.370 portid: 1 00:28:02.370 trsvcid: 4420 00:28:02.370 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:02.370 traddr: 10.0.0.1 00:28:02.370 eflags: none 00:28:02.370 sectype: none 00:28:02.370 =====Discovery Log Entry 1====== 00:28:02.370 trtype: tcp 00:28:02.370 adrfam: ipv4 00:28:02.370 subtype: nvme subsystem 00:28:02.370 treq: not specified, sq flow control disable supported 00:28:02.370 portid: 1 00:28:02.370 trsvcid: 4420 00:28:02.370 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:02.370 traddr: 10.0.0.1 00:28:02.370 eflags: none 00:28:02.370 sectype: none 00:28:02.370 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:02.370 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:02.630 ===================================================== 00:28:02.630 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:02.630 ===================================================== 00:28:02.630 Controller Capabilities/Features 00:28:02.630 ================================ 00:28:02.630 Vendor ID: 0000 00:28:02.630 Subsystem Vendor ID: 0000 00:28:02.630 Serial Number: bf46c935e138047a1964 00:28:02.630 Model Number: Linux 00:28:02.630 Firmware Version: 6.8.9-20 00:28:02.630 Recommended Arb Burst: 0 00:28:02.630 IEEE OUI Identifier: 00 00 00 00:28:02.630 Multi-path I/O 00:28:02.630 May have multiple subsystem ports: No 00:28:02.630 May have multiple controllers: No 00:28:02.630 Associated with SR-IOV VF: No 00:28:02.630 Max Data Transfer Size: Unlimited 00:28:02.630 Max Number of Namespaces: 0 00:28:02.630 Max Number of I/O Queues: 1024 00:28:02.630 NVMe Specification Version (VS): 1.3 00:28:02.630 NVMe Specification Version (Identify): 1.3 00:28:02.630 Maximum Queue Entries: 1024 
00:28:02.630 Contiguous Queues Required: No 00:28:02.630 Arbitration Mechanisms Supported 00:28:02.630 Weighted Round Robin: Not Supported 00:28:02.630 Vendor Specific: Not Supported 00:28:02.630 Reset Timeout: 7500 ms 00:28:02.630 Doorbell Stride: 4 bytes 00:28:02.630 NVM Subsystem Reset: Not Supported 00:28:02.630 Command Sets Supported 00:28:02.630 NVM Command Set: Supported 00:28:02.630 Boot Partition: Not Supported 00:28:02.630 Memory Page Size Minimum: 4096 bytes 00:28:02.630 Memory Page Size Maximum: 4096 bytes 00:28:02.630 Persistent Memory Region: Not Supported 00:28:02.630 Optional Asynchronous Events Supported 00:28:02.630 Namespace Attribute Notices: Not Supported 00:28:02.630 Firmware Activation Notices: Not Supported 00:28:02.630 ANA Change Notices: Not Supported 00:28:02.630 PLE Aggregate Log Change Notices: Not Supported 00:28:02.630 LBA Status Info Alert Notices: Not Supported 00:28:02.630 EGE Aggregate Log Change Notices: Not Supported 00:28:02.630 Normal NVM Subsystem Shutdown event: Not Supported 00:28:02.630 Zone Descriptor Change Notices: Not Supported 00:28:02.630 Discovery Log Change Notices: Supported 00:28:02.630 Controller Attributes 00:28:02.630 128-bit Host Identifier: Not Supported 00:28:02.630 Non-Operational Permissive Mode: Not Supported 00:28:02.630 NVM Sets: Not Supported 00:28:02.630 Read Recovery Levels: Not Supported 00:28:02.630 Endurance Groups: Not Supported 00:28:02.630 Predictable Latency Mode: Not Supported 00:28:02.630 Traffic Based Keep ALive: Not Supported 00:28:02.630 Namespace Granularity: Not Supported 00:28:02.630 SQ Associations: Not Supported 00:28:02.630 UUID List: Not Supported 00:28:02.630 Multi-Domain Subsystem: Not Supported 00:28:02.630 Fixed Capacity Management: Not Supported 00:28:02.630 Variable Capacity Management: Not Supported 00:28:02.630 Delete Endurance Group: Not Supported 00:28:02.630 Delete NVM Set: Not Supported 00:28:02.630 Extended LBA Formats Supported: Not Supported 00:28:02.630 Flexible 
Data Placement Supported: Not Supported 00:28:02.630 00:28:02.630 Controller Memory Buffer Support 00:28:02.630 ================================ 00:28:02.631 Supported: No 00:28:02.631 00:28:02.631 Persistent Memory Region Support 00:28:02.631 ================================ 00:28:02.631 Supported: No 00:28:02.631 00:28:02.631 Admin Command Set Attributes 00:28:02.631 ============================ 00:28:02.631 Security Send/Receive: Not Supported 00:28:02.631 Format NVM: Not Supported 00:28:02.631 Firmware Activate/Download: Not Supported 00:28:02.631 Namespace Management: Not Supported 00:28:02.631 Device Self-Test: Not Supported 00:28:02.631 Directives: Not Supported 00:28:02.631 NVMe-MI: Not Supported 00:28:02.631 Virtualization Management: Not Supported 00:28:02.631 Doorbell Buffer Config: Not Supported 00:28:02.631 Get LBA Status Capability: Not Supported 00:28:02.631 Command & Feature Lockdown Capability: Not Supported 00:28:02.631 Abort Command Limit: 1 00:28:02.631 Async Event Request Limit: 1 00:28:02.631 Number of Firmware Slots: N/A 00:28:02.631 Firmware Slot 1 Read-Only: N/A 00:28:02.631 Firmware Activation Without Reset: N/A 00:28:02.631 Multiple Update Detection Support: N/A 00:28:02.631 Firmware Update Granularity: No Information Provided 00:28:02.631 Per-Namespace SMART Log: No 00:28:02.631 Asymmetric Namespace Access Log Page: Not Supported 00:28:02.631 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:02.631 Command Effects Log Page: Not Supported 00:28:02.631 Get Log Page Extended Data: Supported 00:28:02.631 Telemetry Log Pages: Not Supported 00:28:02.631 Persistent Event Log Pages: Not Supported 00:28:02.631 Supported Log Pages Log Page: May Support 00:28:02.631 Commands Supported & Effects Log Page: Not Supported 00:28:02.631 Feature Identifiers & Effects Log Page:May Support 00:28:02.631 NVMe-MI Commands & Effects Log Page: May Support 00:28:02.631 Data Area 4 for Telemetry Log: Not Supported 00:28:02.631 Error Log Page Entries 
Supported: 1 00:28:02.631 Keep Alive: Not Supported 00:28:02.631 00:28:02.631 NVM Command Set Attributes 00:28:02.631 ========================== 00:28:02.631 Submission Queue Entry Size 00:28:02.631 Max: 1 00:28:02.631 Min: 1 00:28:02.631 Completion Queue Entry Size 00:28:02.631 Max: 1 00:28:02.631 Min: 1 00:28:02.631 Number of Namespaces: 0 00:28:02.631 Compare Command: Not Supported 00:28:02.631 Write Uncorrectable Command: Not Supported 00:28:02.631 Dataset Management Command: Not Supported 00:28:02.631 Write Zeroes Command: Not Supported 00:28:02.631 Set Features Save Field: Not Supported 00:28:02.631 Reservations: Not Supported 00:28:02.631 Timestamp: Not Supported 00:28:02.631 Copy: Not Supported 00:28:02.631 Volatile Write Cache: Not Present 00:28:02.631 Atomic Write Unit (Normal): 1 00:28:02.631 Atomic Write Unit (PFail): 1 00:28:02.631 Atomic Compare & Write Unit: 1 00:28:02.631 Fused Compare & Write: Not Supported 00:28:02.631 Scatter-Gather List 00:28:02.631 SGL Command Set: Supported 00:28:02.631 SGL Keyed: Not Supported 00:28:02.631 SGL Bit Bucket Descriptor: Not Supported 00:28:02.631 SGL Metadata Pointer: Not Supported 00:28:02.631 Oversized SGL: Not Supported 00:28:02.631 SGL Metadata Address: Not Supported 00:28:02.631 SGL Offset: Supported 00:28:02.631 Transport SGL Data Block: Not Supported 00:28:02.631 Replay Protected Memory Block: Not Supported 00:28:02.631 00:28:02.631 Firmware Slot Information 00:28:02.631 ========================= 00:28:02.631 Active slot: 0 00:28:02.631 00:28:02.631 00:28:02.631 Error Log 00:28:02.631 ========= 00:28:02.631 00:28:02.631 Active Namespaces 00:28:02.631 ================= 00:28:02.631 Discovery Log Page 00:28:02.631 ================== 00:28:02.631 Generation Counter: 2 00:28:02.631 Number of Records: 2 00:28:02.631 Record Format: 0 00:28:02.631 00:28:02.631 Discovery Log Entry 0 00:28:02.631 ---------------------- 00:28:02.631 Transport Type: 3 (TCP) 00:28:02.631 Address Family: 1 (IPv4) 00:28:02.631 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:28:02.631 Entry Flags: 00:28:02.631 Duplicate Returned Information: 0 00:28:02.631 Explicit Persistent Connection Support for Discovery: 0 00:28:02.631 Transport Requirements: 00:28:02.631 Secure Channel: Not Specified 00:28:02.631 Port ID: 1 (0x0001) 00:28:02.631 Controller ID: 65535 (0xffff) 00:28:02.631 Admin Max SQ Size: 32 00:28:02.631 Transport Service Identifier: 4420 00:28:02.631 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:02.631 Transport Address: 10.0.0.1 00:28:02.631 Discovery Log Entry 1 00:28:02.631 ---------------------- 00:28:02.631 Transport Type: 3 (TCP) 00:28:02.631 Address Family: 1 (IPv4) 00:28:02.631 Subsystem Type: 2 (NVM Subsystem) 00:28:02.631 Entry Flags: 00:28:02.631 Duplicate Returned Information: 0 00:28:02.631 Explicit Persistent Connection Support for Discovery: 0 00:28:02.631 Transport Requirements: 00:28:02.631 Secure Channel: Not Specified 00:28:02.631 Port ID: 1 (0x0001) 00:28:02.631 Controller ID: 65535 (0xffff) 00:28:02.631 Admin Max SQ Size: 32 00:28:02.631 Transport Service Identifier: 4420 00:28:02.631 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:02.631 Transport Address: 10.0.0.1 00:28:02.631 05:22:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:02.631 get_feature(0x01) failed 00:28:02.631 get_feature(0x02) failed 00:28:02.631 get_feature(0x04) failed 00:28:02.631 ===================================================== 00:28:02.631 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:02.631 ===================================================== 00:28:02.631 Controller Capabilities/Features 00:28:02.631 ================================ 00:28:02.631 Vendor ID: 0000 00:28:02.631 Subsystem Vendor ID: 
0000 00:28:02.631 Serial Number: 1ddd50021b8f139eeaeb 00:28:02.631 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:02.631 Firmware Version: 6.8.9-20 00:28:02.631 Recommended Arb Burst: 6 00:28:02.631 IEEE OUI Identifier: 00 00 00 00:28:02.631 Multi-path I/O 00:28:02.631 May have multiple subsystem ports: Yes 00:28:02.631 May have multiple controllers: Yes 00:28:02.631 Associated with SR-IOV VF: No 00:28:02.631 Max Data Transfer Size: Unlimited 00:28:02.631 Max Number of Namespaces: 1024 00:28:02.631 Max Number of I/O Queues: 128 00:28:02.631 NVMe Specification Version (VS): 1.3 00:28:02.631 NVMe Specification Version (Identify): 1.3 00:28:02.631 Maximum Queue Entries: 1024 00:28:02.631 Contiguous Queues Required: No 00:28:02.631 Arbitration Mechanisms Supported 00:28:02.631 Weighted Round Robin: Not Supported 00:28:02.631 Vendor Specific: Not Supported 00:28:02.631 Reset Timeout: 7500 ms 00:28:02.631 Doorbell Stride: 4 bytes 00:28:02.631 NVM Subsystem Reset: Not Supported 00:28:02.631 Command Sets Supported 00:28:02.631 NVM Command Set: Supported 00:28:02.631 Boot Partition: Not Supported 00:28:02.631 Memory Page Size Minimum: 4096 bytes 00:28:02.631 Memory Page Size Maximum: 4096 bytes 00:28:02.631 Persistent Memory Region: Not Supported 00:28:02.631 Optional Asynchronous Events Supported 00:28:02.631 Namespace Attribute Notices: Supported 00:28:02.631 Firmware Activation Notices: Not Supported 00:28:02.631 ANA Change Notices: Supported 00:28:02.631 PLE Aggregate Log Change Notices: Not Supported 00:28:02.631 LBA Status Info Alert Notices: Not Supported 00:28:02.631 EGE Aggregate Log Change Notices: Not Supported 00:28:02.631 Normal NVM Subsystem Shutdown event: Not Supported 00:28:02.631 Zone Descriptor Change Notices: Not Supported 00:28:02.631 Discovery Log Change Notices: Not Supported 00:28:02.631 Controller Attributes 00:28:02.631 128-bit Host Identifier: Supported 00:28:02.631 Non-Operational Permissive Mode: Not Supported 00:28:02.631 NVM Sets: Not 
Supported 00:28:02.631 Read Recovery Levels: Not Supported 00:28:02.631 Endurance Groups: Not Supported 00:28:02.631 Predictable Latency Mode: Not Supported 00:28:02.631 Traffic Based Keep ALive: Supported 00:28:02.631 Namespace Granularity: Not Supported 00:28:02.631 SQ Associations: Not Supported 00:28:02.631 UUID List: Not Supported 00:28:02.631 Multi-Domain Subsystem: Not Supported 00:28:02.631 Fixed Capacity Management: Not Supported 00:28:02.631 Variable Capacity Management: Not Supported 00:28:02.631 Delete Endurance Group: Not Supported 00:28:02.631 Delete NVM Set: Not Supported 00:28:02.631 Extended LBA Formats Supported: Not Supported 00:28:02.631 Flexible Data Placement Supported: Not Supported 00:28:02.631 00:28:02.631 Controller Memory Buffer Support 00:28:02.631 ================================ 00:28:02.632 Supported: No 00:28:02.632 00:28:02.632 Persistent Memory Region Support 00:28:02.632 ================================ 00:28:02.632 Supported: No 00:28:02.632 00:28:02.632 Admin Command Set Attributes 00:28:02.632 ============================ 00:28:02.632 Security Send/Receive: Not Supported 00:28:02.632 Format NVM: Not Supported 00:28:02.632 Firmware Activate/Download: Not Supported 00:28:02.632 Namespace Management: Not Supported 00:28:02.632 Device Self-Test: Not Supported 00:28:02.632 Directives: Not Supported 00:28:02.632 NVMe-MI: Not Supported 00:28:02.632 Virtualization Management: Not Supported 00:28:02.632 Doorbell Buffer Config: Not Supported 00:28:02.632 Get LBA Status Capability: Not Supported 00:28:02.632 Command & Feature Lockdown Capability: Not Supported 00:28:02.632 Abort Command Limit: 4 00:28:02.632 Async Event Request Limit: 4 00:28:02.632 Number of Firmware Slots: N/A 00:28:02.632 Firmware Slot 1 Read-Only: N/A 00:28:02.632 Firmware Activation Without Reset: N/A 00:28:02.632 Multiple Update Detection Support: N/A 00:28:02.632 Firmware Update Granularity: No Information Provided 00:28:02.632 Per-Namespace SMART Log: Yes 
00:28:02.632 Asymmetric Namespace Access Log Page: Supported 00:28:02.632 ANA Transition Time : 10 sec 00:28:02.632 00:28:02.632 Asymmetric Namespace Access Capabilities 00:28:02.632 ANA Optimized State : Supported 00:28:02.632 ANA Non-Optimized State : Supported 00:28:02.632 ANA Inaccessible State : Supported 00:28:02.632 ANA Persistent Loss State : Supported 00:28:02.632 ANA Change State : Supported 00:28:02.632 ANAGRPID is not changed : No 00:28:02.632 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:02.632 00:28:02.632 ANA Group Identifier Maximum : 128 00:28:02.632 Number of ANA Group Identifiers : 128 00:28:02.632 Max Number of Allowed Namespaces : 1024 00:28:02.632 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:02.632 Command Effects Log Page: Supported 00:28:02.632 Get Log Page Extended Data: Supported 00:28:02.632 Telemetry Log Pages: Not Supported 00:28:02.632 Persistent Event Log Pages: Not Supported 00:28:02.632 Supported Log Pages Log Page: May Support 00:28:02.632 Commands Supported & Effects Log Page: Not Supported 00:28:02.632 Feature Identifiers & Effects Log Page:May Support 00:28:02.632 NVMe-MI Commands & Effects Log Page: May Support 00:28:02.632 Data Area 4 for Telemetry Log: Not Supported 00:28:02.632 Error Log Page Entries Supported: 128 00:28:02.632 Keep Alive: Supported 00:28:02.632 Keep Alive Granularity: 1000 ms 00:28:02.632 00:28:02.632 NVM Command Set Attributes 00:28:02.632 ========================== 00:28:02.632 Submission Queue Entry Size 00:28:02.632 Max: 64 00:28:02.632 Min: 64 00:28:02.632 Completion Queue Entry Size 00:28:02.632 Max: 16 00:28:02.632 Min: 16 00:28:02.632 Number of Namespaces: 1024 00:28:02.632 Compare Command: Not Supported 00:28:02.632 Write Uncorrectable Command: Not Supported 00:28:02.632 Dataset Management Command: Supported 00:28:02.632 Write Zeroes Command: Supported 00:28:02.632 Set Features Save Field: Not Supported 00:28:02.632 Reservations: Not Supported 00:28:02.632 Timestamp: Not Supported 
00:28:02.632 Copy: Not Supported 00:28:02.632 Volatile Write Cache: Present 00:28:02.632 Atomic Write Unit (Normal): 1 00:28:02.632 Atomic Write Unit (PFail): 1 00:28:02.632 Atomic Compare & Write Unit: 1 00:28:02.632 Fused Compare & Write: Not Supported 00:28:02.632 Scatter-Gather List 00:28:02.632 SGL Command Set: Supported 00:28:02.632 SGL Keyed: Not Supported 00:28:02.632 SGL Bit Bucket Descriptor: Not Supported 00:28:02.632 SGL Metadata Pointer: Not Supported 00:28:02.632 Oversized SGL: Not Supported 00:28:02.632 SGL Metadata Address: Not Supported 00:28:02.632 SGL Offset: Supported 00:28:02.632 Transport SGL Data Block: Not Supported 00:28:02.632 Replay Protected Memory Block: Not Supported 00:28:02.632 00:28:02.632 Firmware Slot Information 00:28:02.632 ========================= 00:28:02.632 Active slot: 0 00:28:02.632 00:28:02.632 Asymmetric Namespace Access 00:28:02.632 =========================== 00:28:02.632 Change Count : 0 00:28:02.632 Number of ANA Group Descriptors : 1 00:28:02.632 ANA Group Descriptor : 0 00:28:02.632 ANA Group ID : 1 00:28:02.632 Number of NSID Values : 1 00:28:02.632 Change Count : 0 00:28:02.632 ANA State : 1 00:28:02.632 Namespace Identifier : 1 00:28:02.632 00:28:02.632 Commands Supported and Effects 00:28:02.632 ============================== 00:28:02.632 Admin Commands 00:28:02.632 -------------- 00:28:02.632 Get Log Page (02h): Supported 00:28:02.632 Identify (06h): Supported 00:28:02.632 Abort (08h): Supported 00:28:02.632 Set Features (09h): Supported 00:28:02.632 Get Features (0Ah): Supported 00:28:02.632 Asynchronous Event Request (0Ch): Supported 00:28:02.632 Keep Alive (18h): Supported 00:28:02.632 I/O Commands 00:28:02.632 ------------ 00:28:02.632 Flush (00h): Supported 00:28:02.632 Write (01h): Supported LBA-Change 00:28:02.632 Read (02h): Supported 00:28:02.632 Write Zeroes (08h): Supported LBA-Change 00:28:02.632 Dataset Management (09h): Supported 00:28:02.632 00:28:02.632 Error Log 00:28:02.632 ========= 
00:28:02.632 Entry: 0 00:28:02.632 Error Count: 0x3 00:28:02.632 Submission Queue Id: 0x0 00:28:02.632 Command Id: 0x5 00:28:02.632 Phase Bit: 0 00:28:02.632 Status Code: 0x2 00:28:02.632 Status Code Type: 0x0 00:28:02.632 Do Not Retry: 1 00:28:02.632 Error Location: 0x28 00:28:02.632 LBA: 0x0 00:28:02.632 Namespace: 0x0 00:28:02.632 Vendor Log Page: 0x0 00:28:02.632 ----------- 00:28:02.632 Entry: 1 00:28:02.632 Error Count: 0x2 00:28:02.632 Submission Queue Id: 0x0 00:28:02.632 Command Id: 0x5 00:28:02.632 Phase Bit: 0 00:28:02.632 Status Code: 0x2 00:28:02.632 Status Code Type: 0x0 00:28:02.632 Do Not Retry: 1 00:28:02.632 Error Location: 0x28 00:28:02.632 LBA: 0x0 00:28:02.632 Namespace: 0x0 00:28:02.632 Vendor Log Page: 0x0 00:28:02.632 ----------- 00:28:02.632 Entry: 2 00:28:02.632 Error Count: 0x1 00:28:02.632 Submission Queue Id: 0x0 00:28:02.632 Command Id: 0x4 00:28:02.632 Phase Bit: 0 00:28:02.632 Status Code: 0x2 00:28:02.632 Status Code Type: 0x0 00:28:02.632 Do Not Retry: 1 00:28:02.632 Error Location: 0x28 00:28:02.632 LBA: 0x0 00:28:02.632 Namespace: 0x0 00:28:02.632 Vendor Log Page: 0x0 00:28:02.632 00:28:02.632 Number of Queues 00:28:02.632 ================ 00:28:02.632 Number of I/O Submission Queues: 128 00:28:02.632 Number of I/O Completion Queues: 128 00:28:02.632 00:28:02.632 ZNS Specific Controller Data 00:28:02.632 ============================ 00:28:02.632 Zone Append Size Limit: 0 00:28:02.632 00:28:02.632 00:28:02.632 Active Namespaces 00:28:02.632 ================= 00:28:02.632 get_feature(0x05) failed 00:28:02.632 Namespace ID:1 00:28:02.632 Command Set Identifier: NVM (00h) 00:28:02.632 Deallocate: Supported 00:28:02.632 Deallocated/Unwritten Error: Not Supported 00:28:02.632 Deallocated Read Value: Unknown 00:28:02.632 Deallocate in Write Zeroes: Not Supported 00:28:02.632 Deallocated Guard Field: 0xFFFF 00:28:02.632 Flush: Supported 00:28:02.632 Reservation: Not Supported 00:28:02.632 Namespace Sharing Capabilities: Multiple 
Controllers 00:28:02.632 Size (in LBAs): 3125627568 (1490GiB) 00:28:02.632 Capacity (in LBAs): 3125627568 (1490GiB) 00:28:02.632 Utilization (in LBAs): 3125627568 (1490GiB) 00:28:02.632 UUID: 2813f0c5-77f0-42f2-b070-f332c5020237 00:28:02.632 Thin Provisioning: Not Supported 00:28:02.632 Per-NS Atomic Units: Yes 00:28:02.632 Atomic Boundary Size (Normal): 0 00:28:02.632 Atomic Boundary Size (PFail): 0 00:28:02.632 Atomic Boundary Offset: 0 00:28:02.632 NGUID/EUI64 Never Reused: No 00:28:02.632 ANA group ID: 1 00:28:02.632 Namespace Write Protected: No 00:28:02.632 Number of LBA Formats: 1 00:28:02.632 Current LBA Format: LBA Format #00 00:28:02.632 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:02.632 00:28:02.632 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:02.632 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:02.632 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:28:02.633 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:02.633 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:28:02.633 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:02.633 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:02.633 rmmod nvme_tcp 00:28:02.633 rmmod nvme_fabrics 00:28:02.891 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:02.891 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:28:02.891 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:28:02.891 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:28:02.891 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:02.891 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:02.891 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:02.891 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:28:02.891 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:28:02.891 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:02.891 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:28:02.891 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:02.891 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:02.892 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.892 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:02.892 05:22:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.797 05:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:04.797 05:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:04.797 05:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:04.797 05:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:28:04.797 05:22:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:04.797 05:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:04.797 05:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:04.797 05:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:04.797 05:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:04.797 05:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:05.055 05:22:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:08.344 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:08.344 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:08.344 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:08.344 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:08.344 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:08.344 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:08.344 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:08.344 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:08.344 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:08.344 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:08.344 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:08.344 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:08.344 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:08.344 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:08.603 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:08.603 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
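The `clean_kernel_target` steps traced above (nvmf/common.sh@714-723) unlink the port's subsystem reference and then `rmdir` the namespace, port, and subsystem entries of the kernel nvmet configfs tree in that order. A minimal Python sketch of the same teardown; the NQN and directory layout follow the log, while the function name and the mimic-only intermediate `rmdir` calls (which real configfs does not need) are my assumptions:

```python
import os
from pathlib import Path

def clean_kernel_target(cfg: Path, nqn: str = "nqn.2016-06.io.spdk:testnqn") -> None:
    """Tear down an nvmet configfs-style tree in the order traced in the log."""
    subsys = cfg / "subsystems" / nqn
    if not subsys.exists():          # mirrors the [[ -e ... ]] guard in the trace
        return
    link = cfg / "ports" / "1" / "subsystems" / nqn
    if link.is_symlink() or link.exists():
        link.unlink()                # rm -f .../ports/1/subsystems/<nqn>
    for d in (
        subsys / "namespaces" / "1",         # rmdir .../namespaces/1
        subsys / "namespaces",               # mimic only: configfs removes this itself
        cfg / "ports" / "1" / "subsystems",  # mimic only
        cfg / "ports" / "1",                 # rmdir .../ports/1
        subsys,                              # rmdir .../subsystems/<nqn>
    ):
        if d.is_dir():
            os.rmdir(d)
```

On the real `/sys/kernel/config/nvmet` tree only the three leaf `rmdir` calls from the log are needed (and root privileges); the extra removals let the sketch run against an ordinary directory mimic.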
00:28:09.983 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:28:10.243 00:28:10.243 real 0m20.436s 00:28:10.243 user 0m5.038s 00:28:10.243 sys 0m10.956s 00:28:10.243 05:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:10.243 05:22:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:10.243 ************************************ 00:28:10.243 END TEST nvmf_identify_kernel_target 00:28:10.243 ************************************ 00:28:10.243 05:22:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:10.243 05:22:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:10.243 05:22:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:10.243 05:22:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.243 ************************************ 00:28:10.243 START TEST nvmf_auth_host 00:28:10.243 ************************************ 00:28:10.243 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:10.503 * Looking for test storage... 
00:28:10.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:10.503 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:10.503 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:28:10.503 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:10.503 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:10.503 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:10.503 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:10.503 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:10.503 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:10.503 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:10.503 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:10.503 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:10.503 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:10.503 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:10.503 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:10.503 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:10.503 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:28:10.503 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:28:10.503 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:10.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.504 --rc genhtml_branch_coverage=1 00:28:10.504 --rc genhtml_function_coverage=1 00:28:10.504 --rc genhtml_legend=1 00:28:10.504 --rc geninfo_all_blocks=1 00:28:10.504 --rc geninfo_unexecuted_blocks=1 00:28:10.504 00:28:10.504 ' 00:28:10.504 05:22:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:10.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.504 --rc genhtml_branch_coverage=1 00:28:10.504 --rc genhtml_function_coverage=1 00:28:10.504 --rc genhtml_legend=1 00:28:10.504 --rc geninfo_all_blocks=1 00:28:10.504 --rc geninfo_unexecuted_blocks=1 00:28:10.504 00:28:10.504 ' 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:10.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.504 --rc genhtml_branch_coverage=1 00:28:10.504 --rc genhtml_function_coverage=1 00:28:10.504 --rc genhtml_legend=1 00:28:10.504 --rc geninfo_all_blocks=1 00:28:10.504 --rc geninfo_unexecuted_blocks=1 00:28:10.504 00:28:10.504 ' 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:10.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.504 --rc genhtml_branch_coverage=1 00:28:10.504 --rc genhtml_function_coverage=1 00:28:10.504 --rc genhtml_legend=1 00:28:10.504 --rc geninfo_all_blocks=1 00:28:10.504 --rc geninfo_unexecuted_blocks=1 00:28:10.504 00:28:10.504 ' 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
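The `cmp_versions`/`lt 1.15 2` trace above (scripts/common.sh@333-368, gating the lcov version before enabling the coverage flags) splits each version on `.`/`-` and compares numeric fields position by position, padding the shorter one with zeros. A Python sketch of that comparison; the function names are mine:

```python
def version_fields(v: str) -> list[int]:
    # Split on dots and dashes, as the script's IFS=.-: does, keeping numeric fields.
    return [int(p) for p in v.replace("-", ".").split(".") if p.isdigit()]

def lt(v1: str, v2: str) -> bool:
    """True if v1 < v2 under field-by-field numeric comparison."""
    a, b = version_fields(v1), version_fields(v2)
    n = max(len(a), len(b))
    a += [0] * (n - len(a))   # pad the shorter version with zeros
    b += [0] * (n - len(b))
    return a < b
```

This reproduces the result visible in the trace: `1.15` compares below `2` because the first fields already decide (1 < 2), so the newer lcov option set is selected.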
00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.504 05:22:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:10.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:10.504 05:22:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:10.504 05:22:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:18.636 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:18.637 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:18.637 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:18.637 Found net devices under 0000:af:00.0: cvl_0_0 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:18.637 Found net devices under 0000:af:00.1: cvl_0_1 00:28:18.637 05:22:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:18.637 05:22:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:18.637 05:22:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:18.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:18.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:28:18.637 00:28:18.637 --- 10.0.0.2 ping statistics --- 00:28:18.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.637 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:18.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
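The `nvmf_tcp_init` sequence traced above (nvmf/common.sh@250-290) builds a two-endpoint test topology: the target NIC is moved into a network namespace, both sides get 10.0.0.x/24 addresses, links come up, and an iptables rule opens TCP port 4420 before the cross-namespace pings. A sketch that assembles that command sequence (without executing it, since it needs root); interface and namespace names follow the log, the function name is mine:

```python
def nvmf_tcp_init_cmds(target_if: str = "cvl_0_0",
                       initiator_if: str = "cvl_0_1",
                       ns: str = "cvl_0_0_ns_spdk",
                       target_ip: str = "10.0.0.2",
                       initiator_ip: str = "10.0.0.1") -> list[str]:
    """Return the setup commands in the order the trace runs them."""
    return [
        f"ip netns add {ns}",
        f"ip link set {target_if} netns {ns}",               # target NIC into the namespace
        f"ip addr add {initiator_ip}/24 dev {initiator_if}", # initiator side address
        f"ip netns exec {ns} ip addr add {target_ip}/24 dev {target_if}",
        f"ip link set {initiator_if} up",
        f"ip netns exec {ns} ip link set {target_if} up",
        f"ip netns exec {ns} ip link set lo up",
        f"iptables -I INPUT 1 -i {initiator_if} -p tcp --dport 4420 -j ACCEPT",
    ]
```

Each string could be fed to `subprocess.run(cmd.split(), check=True)` on a privileged test host; the pings in the log then verify reachability in both directions before `nvmf_tgt` is started inside the namespace.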
00:28:18.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:28:18.637 00:28:18.637 --- 10.0.0.1 ping statistics --- 00:28:18.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.637 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=630641 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:18.637 05:23:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 630641 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 630641 ']' 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:18.637 05:23:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.637 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:18.637 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:18.637 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:18.637 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:18.637 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.637 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:18.637 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:18.637 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:18.637 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:18.637 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:18.637 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:18.637 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:18.637 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:18.638 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:18.638 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1071a53810dbad4bf09c0b6ea4b19967 00:28:18.638 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:18.638 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.9RG 00:28:18.638 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1071a53810dbad4bf09c0b6ea4b19967 0 00:28:18.638 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1071a53810dbad4bf09c0b6ea4b19967 0 00:28:18.638 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:18.638 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:18.638 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1071a53810dbad4bf09c0b6ea4b19967 00:28:18.638 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:18.638 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.9RG 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.9RG 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.9RG 00:28:18.899 05:23:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6ecaa6d4ac258aa16981f9bccdd5e94f7ef1010af21279a9fd3a9d90a3fa1103 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Ys0 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6ecaa6d4ac258aa16981f9bccdd5e94f7ef1010af21279a9fd3a9d90a3fa1103 3 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6ecaa6d4ac258aa16981f9bccdd5e94f7ef1010af21279a9fd3a9d90a3fa1103 3 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6ecaa6d4ac258aa16981f9bccdd5e94f7ef1010af21279a9fd3a9d90a3fa1103 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
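Each `gen_dhchap_key <digest> <len>` call above (common.sh@751–760) draws len/2 random bytes via `xxd`, yielding a hex string of exactly `len` ASCII characters (32, 48, or 64 — the valid DH-HMAC-CHAP secret lengths), then the `python -` step wraps it in the transport representation `DHHC-1:<dd>:<base64(secret || CRC-32)>:`, where `<dd>` is the hash id seen in the `digests` map (00 null, 01 sha256, 02 sha384, 03 sha512) and the CRC-32 is appended little-endian. A self-contained sketch of that pipeline (variable names are illustrative; treating the ASCII hex string itself as the secret bytes matches the lengths used here but is an assumption about common.sh internals):

```shell
#!/usr/bin/env bash
# Sketch of gen_dhchap_key / format_dhchap_key as logged above.
# The log uses xxd; od is an equivalent fallback if xxd is absent.
key=$(xxd -p -c0 -l 16 /dev/urandom 2>/dev/null) \
    || key=$(od -An -N16 -tx1 /dev/urandom | tr -d ' \n')  # 32 hex chars
digest=0  # 0=null, 1=sha256, 2=sha384, 3=sha512

# DHHC-1 representation: base64 over the secret followed by its
# little-endian CRC-32 (assumption: the ASCII string is the secret).
formatted=$(python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]),
                                 base64.b64encode(secret + crc).decode()))
EOF
)

# Persist with owner-only permissions, mirroring mktemp/chmod 0600 above.
file=$(mktemp -t spdk.key-null.XXX)
printf '%s\n' "$formatted" > "$file"
chmod 0600 "$file"
echo "$file"
```

The 0600 mode matters: secret files readable by other users are typically rejected by NVMe tooling.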
00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Ys0 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Ys0 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Ys0 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=384fc4ffb574ac0022108f79be9d2667dd1b301d99d2d9c0 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.gBU 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 384fc4ffb574ac0022108f79be9d2667dd1b301d99d2d9c0 0 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 384fc4ffb574ac0022108f79be9d2667dd1b301d99d2d9c0 0 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:18.899 05:23:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=384fc4ffb574ac0022108f79be9d2667dd1b301d99d2d9c0 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.gBU 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.gBU 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.gBU 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1281fa57c92e645269407153223e9529b0863df49deb40d7 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.jlA 00:28:18.899 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1281fa57c92e645269407153223e9529b0863df49deb40d7 2 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 1281fa57c92e645269407153223e9529b0863df49deb40d7 2 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1281fa57c92e645269407153223e9529b0863df49deb40d7 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.jlA 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.jlA 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.jlA 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f74b22d3ba01b49942f4c31bbbd77b87 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.2fO 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f74b22d3ba01b49942f4c31bbbd77b87 1 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f74b22d3ba01b49942f4c31bbbd77b87 1 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f74b22d3ba01b49942f4c31bbbd77b87 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:18.900 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.2fO 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.2fO 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.2fO 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=45af9328ed7a0250b8f13e7f175a3fe9 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.v1s 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 45af9328ed7a0250b8f13e7f175a3fe9 1 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 45af9328ed7a0250b8f13e7f175a3fe9 1 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=45af9328ed7a0250b8f13e7f175a3fe9 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.v1s 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.v1s 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.v1s 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:19.160 05:23:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=83df5c91277be10324ee59d20c04d42732386aeb61054896 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.lmC 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 83df5c91277be10324ee59d20c04d42732386aeb61054896 2 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 83df5c91277be10324ee59d20c04d42732386aeb61054896 2 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=83df5c91277be10324ee59d20c04d42732386aeb61054896 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.lmC 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.lmC 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.lmC 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=13d38eb09e9649ba7ef4bcd798ba3582 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.xx4 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 13d38eb09e9649ba7ef4bcd798ba3582 0 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 13d38eb09e9649ba7ef4bcd798ba3582 0 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=13d38eb09e9649ba7ef4bcd798ba3582 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.xx4 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.xx4 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.xx4 00:28:19.160 05:23:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fc8be2bd27d5bc47321664b34e9c9031de4d96ca2cc6ccd401c39f1e32b91195 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.U0w 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fc8be2bd27d5bc47321664b34e9c9031de4d96ca2cc6ccd401c39f1e32b91195 3 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fc8be2bd27d5bc47321664b34e9c9031de4d96ca2cc6ccd401c39f1e32b91195 3 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fc8be2bd27d5bc47321664b34e9c9031de4d96ca2cc6ccd401c39f1e32b91195 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:19.160 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.U0w 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.U0w 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.U0w 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 630641 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 630641 ']' 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.9RG 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Ys0 ]] 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ys0 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.gBU 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.420 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
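The `for i in "${!keys[@]}"` iterations above and below (host/auth.sh@80–82) register every generated host key, plus its optional controller key `ckeyN`, with the running target through the `keyring_file_add_key` RPC. The loop is roughly the following; `rpc_cmd` here is a stand-in that echoes what SPDK's RPC wrapper would be invoked with, and the file names are the ones from this run:

```shell
#!/usr/bin/env bash
# Sketch of the host/auth.sh key-registration loop. rpc_cmd is an
# illustrative stand-in for the real RPC call (assumption), so this
# runs anywhere; the paths are the temp key files from the log above.
keys=(/tmp/spdk.key-null.9RG /tmp/spdk.key-null.gBU /tmp/spdk.key-sha256.2fO)
ckeys=(/tmp/spdk.key-sha512.Ys0 /tmp/spdk.key-sha384.jlA /tmp/spdk.key-sha256.v1s)

calls=0
rpc_cmd() { calls=$((calls + 1)); echo "rpc: $*"; }

for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    # A controller key is optional; in the log, ckeys[4] is empty.
    if [ -n "${ckeys[i]-}" ]; then
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done
echo "registered $calls keys"
```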
00:28:19.679 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.679 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.jlA ]] 00:28:19.679 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jlA 00:28:19.679 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.679 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.679 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.679 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:19.679 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.2fO 00:28:19.679 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.679 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.679 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.679 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.v1s ]] 00:28:19.679 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.v1s 00:28:19.679 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.679 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.679 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.679 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:19.679 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.lmC 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.xx4 ]] 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.xx4 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.U0w 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.680 05:23:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:19.680 05:23:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:22.972 Waiting for block devices as requested 00:28:22.972 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:22.972 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:22.972 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:23.231 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:23.231 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:23.231 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:23.490 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:23.490 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:23.490 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:23.748 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:23.748 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:23.748 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:24.007 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:24.007 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:24.007 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:24.266 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:24.266 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:25.204 No valid GPT data, bailing 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:28:25.204 00:28:25.204 Discovery Log Number of Records 2, Generation counter 2 00:28:25.204 =====Discovery Log Entry 0====== 00:28:25.204 trtype: tcp 00:28:25.204 adrfam: ipv4 00:28:25.204 subtype: current discovery subsystem 00:28:25.204 treq: not specified, sq flow control disable supported 00:28:25.204 portid: 1 00:28:25.204 trsvcid: 4420 00:28:25.204 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:25.204 traddr: 10.0.0.1 00:28:25.204 eflags: none 00:28:25.204 sectype: none 00:28:25.204 =====Discovery Log Entry 1====== 00:28:25.204 trtype: tcp 00:28:25.204 adrfam: ipv4 00:28:25.204 subtype: nvme subsystem 00:28:25.204 treq: not specified, sq flow control disable supported 00:28:25.204 portid: 1 00:28:25.204 trsvcid: 4420 00:28:25.204 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:25.204 traddr: 10.0.0.1 00:28:25.204 eflags: none 00:28:25.204 sectype: none 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: ]] 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:25.204 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
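
The log records above (nvmf/common.sh@686-@708 and host/auth.sh@36-@38) configure the kernel nvmet target through configfs. A dry-run sketch of that sequence follows; the attribute file names written to are assumptions inferred from the echoed values (the xtrace output shows only the `echo` arguments, not their redirection targets), and `run()` prints commands instead of executing them, since the real steps need root and the `nvmet` module:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the configfs steps traced in the log. run() echoes rather
# than executes; attr_* / device_path targets are assumed, not shown in the log.
run() { echo "+ $*"; }

nvmet=/sys/kernel/config/nvmet
subnqn=nqn.2024-02.io.spdk:cnode0
subsys=$nvmet/subsystems/$subnqn
hostnqn=nqn.2024-02.io.spdk:host0

run mkdir "$subsys"                                             # @686
run mkdir "$subsys/namespaces/1"                                # @687
run mkdir "$nvmet/ports/1"                                      # @688
run "echo SPDK-$subnqn > $subsys/attr_serial"                   # @693 (target file assumed)
run "echo 1 > $subsys/attr_allow_any_host"                      # @695 (target file assumed)
run "echo /dev/nvme0n1 > $subsys/namespaces/1/device_path"      # @696 (target file assumed)
run "echo 1 > $subsys/namespaces/1/enable"                      # @697 (target file assumed)
run "echo 10.0.0.1 > $nvmet/ports/1/addr_traddr"                # @699 (target file assumed)
run "echo tcp > $nvmet/ports/1/addr_trtype"                     # @700 (target file assumed)
run "echo 4420 > $nvmet/ports/1/addr_trsvcid"                   # @701 (target file assumed)
run "echo ipv4 > $nvmet/ports/1/addr_adrfam"                    # @702 (target file assumed)
run ln -s "$subsys" "$nvmet/ports/1/subsystems/"                # @705 exposes the subsystem
run mkdir "$nvmet/hosts/$hostnqn"                               # auth.sh@36
run ln -s "$nvmet/hosts/$hostnqn" "$subsys/allowed_hosts/$hostnqn"  # auth.sh@38
```

After the port symlink, the `nvme discover` call in the log sees two records: the well-known discovery subsystem and `nqn.2024-02.io.spdk:cnode0`, both on 10.0.0.1:4420/tcp.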
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.205 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.464 nvme0n1 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: ]] 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.464 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:25.465 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.465 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.465 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.465 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.465 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.465 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.465 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.465 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.465 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.465 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:28:25.465 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.465 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.465 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.465 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.465 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:25.465 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.465 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.724 nvme0n1 00:28:25.724 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.724 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.724 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.724 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.724 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.724 05:23:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.724 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.724 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.724 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.724 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.724 05:23:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.724 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.724 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:25.724 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.724 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.724 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: ]] 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.725 
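
The `nvmet_auth_set_key` calls in the log echo secrets of the form `DHHC-1:<hmac id>:<base64>:`. A small sketch parsing one of the keyid=1 secrets shown above; treating the hmac id `00` as "secret used as-is" and the last 4 decoded bytes as a checksum over the secret is an assumption about the DHHC-1 encoding, not something the log states:

```shell
#!/usr/bin/env bash
# Parse one DHHC-1 secret copied verbatim from the log (host/auth.sh@45, keyid=1).
key='DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==:'

hmac_id=$(echo "$key" | cut -d: -f2)            # 00 = no HMAC transform (assumption)
b64=$(echo "$key" | cut -d: -f3)                # base64 payload between the colons
payload_len=$(echo "$b64" | base64 -d | wc -c)  # decoded payload size in bytes
secret_len=$((payload_len - 4))                 # trailing 4 bytes: checksum (assumption)

echo "hmac=$hmac_id payload=${payload_len}B secret=${secret_len}B"
# -> hmac=00 payload=52B secret=48B (a 48-byte secret, the largest DHCHAP size)
```

The keyid=0 and keyid=2 secrets in the log are shorter payloads with hmac ids `00`/`01`/`02`/`03`, matching the sha256/sha384/sha512 digest loop the test iterates over.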
05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.725 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.984 nvme0n1 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: ]] 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:28:25.984 nvme0n1 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.984 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: ]] 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.243 nvme0n1 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.243 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:26.502 05:23:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.502 nvme0n1 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.502 
05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:26.502 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:26.503 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:26.503 05:23:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:26.760 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:26.760 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: ]] 00:28:26.760 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:26.760 
05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:26.760 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.760 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:26.760 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:26.760 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:26.761 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.761 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:26.761 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.761 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.761 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.761 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.761 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.761 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.761 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.761 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.761 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.761 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.761 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.761 05:23:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.761 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.761 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.761 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:26.761 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.761 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.019 nvme0n1 00:28:27.019 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.020 05:23:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: ]] 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:27.020 05:23:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.020 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.279 nvme0n1 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.279 05:23:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: ]] 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.279 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.538 nvme0n1 00:28:27.538 05:23:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:27.538 05:23:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: ]] 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.538 05:23:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.798 nvme0n1 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.798 05:23:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.798 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.057 nvme0n1
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp:
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=:
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:28.057 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp:
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: ]]
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=:
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.626 05:23:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.626 nvme0n1
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==:
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==:
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==:
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: ]]
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==:
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.886 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.146 nvme0n1
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO:
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw:
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO:
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: ]]
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw:
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.146 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.406 nvme0n1
00:28:29.406 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.406 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:29.406 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:29.406 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.406 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.406 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.406 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:29.406 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==:
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD:
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==:
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: ]]
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD:
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.407 05:23:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.667 nvme0n1
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=:
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=:
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.667 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.926 nvme0n1
00:28:29.926 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.926 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:29.926 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:29.926 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.926 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.926 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.186 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:30.186 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:30.186 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.186 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.186 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.186 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:30.186 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:30.186 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:28:30.186 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:30.186 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:30.186 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:30.186 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:30.186 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp:
00:28:30.186 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=:
00:28:30.186 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:30.186 05:23:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp:
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: ]]
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=:
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.567 05:23:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.827 nvme0n1
00:28:31.827 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.827 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:31.827 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.827 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:31.827 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.827 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.827 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==:
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==:
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==:
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: ]]
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==:
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.828 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:32.087 nvme0n1
00:28:32.087 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:32.087 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:32.087 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:32.087 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:32.087 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:32.087 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:32.087 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:32.087 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:32.087 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:32.087 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 --
# keyid=2 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: ]] 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.347 05:23:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.347 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.607 nvme0n1 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.607 05:23:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: ]] 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.607 05:23:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:32.607 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:32.607 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.607 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:32.607 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.607 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.608 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.608 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.608 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.608 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.608 05:23:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.608 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.608 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.608 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.608 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.608 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.608 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.608 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.608 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:32.608 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.608 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.176 nvme0n1 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.176 05:23:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.176 05:23:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.176 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.435 nvme0n1 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: ]] 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.435 05:23:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:33.435 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.436 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:33.436 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.436 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.709 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.709 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.709 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.709 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.709 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.709 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.709 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.709 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.709 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.709 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.709 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.709 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.709 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:33.709 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.709 05:23:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.278 nvme0n1 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.278 05:23:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: ]] 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.278 05:23:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.278 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.279 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.279 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.279 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.279 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.279 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:34.279 05:23:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.279 05:23:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.857 nvme0n1 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: ]] 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.857 05:23:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.857 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.552 nvme0n1 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: ]] 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.552 05:23:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.212 nvme0n1 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.212 
05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.212 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.780 nvme0n1 00:28:36.780 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.780 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.780 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.780 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.780 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.780 05:23:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.780 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.780 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.780 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.780 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.780 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.780 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:36.780 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:36.780 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:28:36.780 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:36.780 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.780 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:36.780 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:36.780 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:36.780 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:36.780 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:36.780 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:36.780 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:36.780 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: ]] 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.781 nvme0n1 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.781 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.040 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.040 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.040 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.040 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.040 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.040 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.040 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:37.040 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.040 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.040 
05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.040 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:37.040 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:37.040 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:37.040 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.040 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.040 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:37.040 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: ]] 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.041 nvme0n1 
00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:37.041 05:23:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: ]] 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.041 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.301 
05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.301 nvme0n1 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.301 05:23:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: ]] 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.301 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.560 nvme0n1 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.560 05:23:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.560 05:23:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.819 nvme0n1 00:28:37.819 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.819 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.819 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.819 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.819 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.819 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.819 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.819 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: ]] 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.820 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.079 nvme0n1 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:38.079 
05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: ]] 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.079 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.080 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.080 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.080 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.080 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.080 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.080 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:38.080 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.080 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.338 nvme0n1 00:28:38.338 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:28:38.338 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.338 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.338 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.338 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.338 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.338 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.338 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.338 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.338 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.338 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.338 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.338 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:38.338 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.338 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.338 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:38.338 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:38.338 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:38.338 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 
00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: ]] 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.339 05:23:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.339 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.597 nvme0n1 00:28:38.597 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.597 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.597 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.597 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.597 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.597 05:23:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.597 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.597 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.597 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.597 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.597 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.597 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.597 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:38.597 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: ]] 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.598 05:23:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.856 nvme0n1 00:28:38.856 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.856 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.856 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.856 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.856 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.856 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.856 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.856 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.856 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:38.856 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.856 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.856 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.856 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:38.856 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.856 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.856 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:38.856 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:38.856 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:38.856 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:38.856 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.856 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.857 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.115 nvme0n1 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.115 05:23:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: ]] 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:39.115 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.115 05:23:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:39.116 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.116 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.116 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.116 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.116 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.116 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.116 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.116 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.116 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.116 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.116 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.116 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.116 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.116 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.116 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:39.116 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.116 05:23:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.375 nvme0n1 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: ]] 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.375 
05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.375 05:23:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.634 nvme0n1 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: ]] 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.634 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:39.635 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.635 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.635 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.635 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.635 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.635 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.635 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 
00:28:39.635 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.635 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.635 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.635 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.635 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.635 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.635 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.635 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:39.635 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.635 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.894 nvme0n1 00:28:39.894 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.894 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.894 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.894 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.894 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.894 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: ]] 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.154 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.414 nvme0n1 00:28:40.414 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.414 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.414 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.414 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.414 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.414 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.414 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.414 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.415 05:23:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.674 nvme0n1 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 
00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: ]] 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:40.674 05:23:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.674 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.243 nvme0n1 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: ]] 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.243 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.503 nvme0n1 00:28:41.503 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.503 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.503 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.503 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
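The trace above shows `host/auth.sh` expanding `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})`: the `--dhchap-ctrlr-key` argument is appended to `bdev_nvme_attach_controller` only when a controller key exists for that keyid (later in the log, keyid 4 has an empty `ckey=` and the attach call carries only `--dhchap-key key4`). A minimal Python sketch of that conditional argument construction follows; it is an illustration of the bash logic, not SPDK code, and `attach_args` is a hypothetical helper name:

```python
def attach_args(keyid, ckeys):
    """Build the bdev_nvme_attach_controller argument list the way
    host/auth.sh does: bash's ${ckeys[keyid]:+...} expands to the
    --dhchap-ctrlr-key pair only when ckeys[keyid] is set and non-empty."""
    args = [
        "bdev_nvme_attach_controller",
        "-b", "nvme0", "-t", "tcp", "-f", "ipv4",
        "-a", "10.0.0.1", "-s", "4420",
        "-q", "nqn.2024-02.io.spdk:host0",
        "-n", "nqn.2024-02.io.spdk:cnode0",
        "--dhchap-key", f"key{keyid}",
    ]
    if ckeys.get(keyid):  # empty string or missing entry -> no ctrlr key args
        args += ["--dhchap-ctrlr-key", f"ckey{keyid}"]
    return args
```

With a non-empty controller key the pair is present; with `ckey=''` (keyid 4 in this run) it is omitted, matching the attach commands recorded in the log.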
00:28:41.503 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.503 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.503 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.503 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.503 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.503 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.503 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.503 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.503 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:41.503 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.503 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.503 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:41.503 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:41.503 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:41.503 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:41.503 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.503 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: ]] 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.762 05:23:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.022 nvme0n1 00:28:42.022 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.022 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.022 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.022 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.022 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.022 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.022 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: ]] 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.023 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.592 nvme0n1 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.592 05:23:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:42.857 nvme0n1 00:28:42.857 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.857 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.857 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.857 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.857 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.857 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.857 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.857 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.857 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.857 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:43.118 05:23:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: ]] 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.118 05:23:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.118 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.687 nvme0n1 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:43.687 05:23:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: ]] 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.687 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.688 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.688 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.688 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.688 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.688 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.688 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.688 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:43.688 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.688 05:23:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.256 nvme0n1 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.257 
05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: ]] 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.257 05:23:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.257 05:23:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.827 nvme0n1 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.827 05:23:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: ]] 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.827 05:23:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.827 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.396 nvme0n1 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:45.396 05:23:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.396 05:23:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.965 nvme0n1 00:28:45.965 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.965 
05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.965 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.965 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.965 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.965 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: ]] 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.226 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.227 nvme0n1 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.227 05:23:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha512)' 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: ]] 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.227 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.487 nvme0n1 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: ]] 
00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.487 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:28:46.488 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.488 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.488 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.488 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.488 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:46.488 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.488 05:23:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.747 nvme0n1 00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==:
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD:
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==:
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: ]]
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD:
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.747 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:47.007 nvme0n1
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=:
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=:
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.007 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:47.266 nvme0n1
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp:
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=:
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp:
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: ]]
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=:
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:47.266 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:47.267 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:47.267 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.267 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:47.267 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:47.267 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:47.267 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:47.267 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:47.267 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:47.267 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:47.267 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:47.267 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:47.267 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:47.267 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:47.267 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:47.267 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:47.267 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:47.267 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.267 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:47.526 nvme0n1
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==:
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==:
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==:
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: ]]
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==:
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.526 05:23:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:47.785 nvme0n1
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO:
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw:
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO:
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: ]]
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw:
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.785 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:48.045 nvme0n1
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==:
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD:
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==:
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: ]]
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD:
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:48.045 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:48.304 nvme0n1
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=:
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:48.304 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=:
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:48.305 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:48.564 nvme0n1
00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:48.564
05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: ]] 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.564 05:23:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.564 05:23:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.824 nvme0n1 00:28:48.824 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.824 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.824 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.824 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.824 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.824 05:23:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.824 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:48.825 05:23:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: ]] 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.825 05:23:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.825 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.084 nvme0n1 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.084 05:23:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: ]] 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:49.084 05:23:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.084 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.343 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.343 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.343 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.343 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.343 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.343 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.343 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.343 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.343 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.343 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.343 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.343 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.343 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:49.343 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.343 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.343 nvme0n1 00:28:49.343 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.343 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.343 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.343 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.343 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: ]] 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:49.603 05:23:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.603 05:23:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.862 nvme0n1 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.862 
05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.862 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.121 nvme0n1 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:50.121 05:23:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: ]] 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.121 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.689 nvme0n1 00:28:50.689 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: ]] 00:28:50.690 05:23:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.690 05:23:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.950 nvme0n1 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: ]] 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.950 
05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.950 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.518 nvme0n1 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.518 05:23:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: ]] 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.518 05:23:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:51.793 nvme0n1 00:28:51.793 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.793 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.793 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.793 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.793 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.793 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.793 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.793 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.793 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.793 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.052 
05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.052 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.312 nvme0n1 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTA3MWE1MzgxMGRiYWQ0YmYwOWMwYjZlYTRiMTk5NjdfFthp: 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: ]] 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmVjYWE2ZDRhYzI1OGFhMTY5ODFmOWJjY2RkNWU5NGY3ZWYxMDEwYWYyMTI3OWE5ZmQzYTlkOTBhM2ZhMTEwM7Xzq4c=: 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.312 05:23:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.312 05:23:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.884 nvme0n1 00:28:52.884 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.884 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.884 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.884 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.884 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.884 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.884 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.884 05:23:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.884 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.884 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.884 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.884 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.884 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:52.885 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.885 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.885 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:52.885 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:52.885 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:52.885 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:52.885 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.885 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:52.885 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:52.885 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: ]] 00:28:52.885 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:52.885 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.144 05:23:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.144 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.712 nvme0n1 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.712 05:23:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: ]] 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:53.712 05:23:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.712 05:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.712 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.712 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.712 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.712 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.712 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.712 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.712 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.712 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.712 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.712 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.712 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.712 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.712 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:53.712 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.712 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.281 nvme0n1 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.281 05:23:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNkZjVjOTEyNzdiZTEwMzI0ZWU1OWQyMGMwNGQ0MjczMjM4NmFlYjYxMDU0ODk24OCisg==: 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: ]] 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTNkMzhlYjA5ZTk2NDliYTdlZjRiY2Q3OThiYTM1ODJRQPRD: 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.281 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.282 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.282 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.282 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:54.282 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.282 05:23:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
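The loop above walks each keyid, feeding `DHHC-1:...` strings to both the kernel target (`nvmet_auth_set_key`) and the SPDK host (`bdev_nvme_attach_controller --dhchap-key keyN`). Those strings can be unpacked with standard tools; the sketch below decodes one of the secrets exercised in this run, assuming the nvme-cli/SPDK secret representation (`DHHC-1:<tag>:<base64 of secret bytes followed by a 4-byte CRC-32>:`, with tag `00` meaning a cleartext secret).

```shell
#!/bin/sh
# Minimal sketch: unpack the base64 payload of a DHHC-1 secret string
# (the key0 secret from the log above). Assumption: the payload is the
# raw secret bytes followed by a 4-byte CRC-32, per the DH-HMAC-CHAP
# secret representation used by nvme-cli and SPDK.
key='DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==:'

# Field 1 is the "DHHC-1" magic, field 2 the transformation tag,
# field 3 the base64 payload.
payload=$(printf '%s' "$key" | cut -d: -f3)

decoded_len=$(printf '%s' "$payload" | base64 -d | wc -c | tr -d ' ')
secret_len=$((decoded_len - 4))   # strip the trailing CRC-32

echo "decoded=${decoded_len} bytes, secret=${secret_len} bytes"
```

For this key the payload decodes to 52 bytes, i.e. a 48-byte secret plus the CRC, which matches one of the secret sizes (32/48/64 bytes) that DH-HMAC-CHAP allows.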
00:28:54.851 nvme0n1 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmM4YmUyYmQyN2Q1YmM0NzMyMTY2NGIzNGU5YzkwMzFkZTRkOTZjYTJjYzZjY2Q0MDFjMzlmMWUzMmI5MTE5Ncvlu6s=: 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.851 
05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.851 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.421 nvme0n1 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: ]] 00:28:55.421 
05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:55.421 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.682 request: 00:28:55.682 { 00:28:55.682 "name": "nvme0", 00:28:55.682 "trtype": "tcp", 00:28:55.682 "traddr": "10.0.0.1", 00:28:55.682 "adrfam": "ipv4", 00:28:55.682 "trsvcid": "4420", 00:28:55.682 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:55.682 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:55.682 "prchk_reftag": false, 00:28:55.682 "prchk_guard": false, 00:28:55.682 "hdgst": false, 00:28:55.682 "ddgst": false, 00:28:55.682 "allow_unrecognized_csi": false, 00:28:55.682 "method": "bdev_nvme_attach_controller", 00:28:55.682 "req_id": 1 00:28:55.682 } 00:28:55.682 Got JSON-RPC error response 00:28:55.682 response: 00:28:55.682 { 00:28:55.682 "code": -5, 00:28:55.682 "message": "Input/output 
error" 00:28:55.682 } 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:55.682 05:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.682 request: 00:28:55.682 { 00:28:55.682 "name": "nvme0", 00:28:55.682 "trtype": "tcp", 00:28:55.682 "traddr": "10.0.0.1", 
00:28:55.682 "adrfam": "ipv4", 00:28:55.682 "trsvcid": "4420", 00:28:55.682 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:55.682 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:55.682 "prchk_reftag": false, 00:28:55.682 "prchk_guard": false, 00:28:55.682 "hdgst": false, 00:28:55.682 "ddgst": false, 00:28:55.682 "dhchap_key": "key2", 00:28:55.682 "allow_unrecognized_csi": false, 00:28:55.682 "method": "bdev_nvme_attach_controller", 00:28:55.682 "req_id": 1 00:28:55.682 } 00:28:55.682 Got JSON-RPC error response 00:28:55.682 response: 00:28:55.682 { 00:28:55.682 "code": -5, 00:28:55.682 "message": "Input/output error" 00:28:55.682 } 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.682 05:23:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:55.682 05:23:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.682 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.942 request: 00:28:55.942 { 00:28:55.942 "name": "nvme0", 00:28:55.942 "trtype": "tcp", 00:28:55.942 "traddr": "10.0.0.1", 00:28:55.942 "adrfam": "ipv4", 00:28:55.942 "trsvcid": "4420", 00:28:55.942 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:55.942 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:55.942 "prchk_reftag": false, 00:28:55.942 "prchk_guard": false, 00:28:55.942 "hdgst": false, 00:28:55.942 "ddgst": false, 00:28:55.942 "dhchap_key": "key1", 00:28:55.942 "dhchap_ctrlr_key": "ckey2", 00:28:55.942 "allow_unrecognized_csi": false, 00:28:55.942 "method": "bdev_nvme_attach_controller", 00:28:55.942 "req_id": 1 00:28:55.942 } 00:28:55.942 Got JSON-RPC error response 00:28:55.942 response: 00:28:55.942 { 00:28:55.942 "code": -5, 00:28:55.942 "message": "Input/output error" 00:28:55.942 } 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.942 nvme0n1 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.942 05:23:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:55.942 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:55.943 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:55.943 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:55.943 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:55.943 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:55.943 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: ]] 00:28:55.943 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:55.943 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:55.943 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.943 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.202 05:23:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.202 request: 00:28:56.202 { 00:28:56.202 "name": "nvme0", 00:28:56.202 "dhchap_key": "key1", 00:28:56.202 "dhchap_ctrlr_key": "ckey2", 00:28:56.202 "method": "bdev_nvme_set_keys", 00:28:56.202 "req_id": 1 00:28:56.202 } 00:28:56.202 Got JSON-RPC error response 00:28:56.202 response: 00:28:56.202 { 00:28:56.202 "code": -13, 00:28:56.202 "message": "Permission denied" 00:28:56.202 } 00:28:56.202 
05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:56.202 05:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:57.138 05:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:57.138 05:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.138 05:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.138 05:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.138 05:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.397 05:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:57.397 05:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzg0ZmM0ZmZiNTc0YWMwMDIyMTA4Zjc5YmU5ZDI2NjdkZDFiMzAxZDk5ZDJkOWMwhu6RNw==: 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: ]] 00:28:58.333 05:23:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTI4MWZhNTdjOTJlNjQ1MjY5NDA3MTUzMjIzZTk1MjliMDg2M2RmNDlkZWI0MGQ3l6R/Ew==: 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.333 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.591 nvme0n1 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.591 05:23:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjc0YjIyZDNiYTAxYjQ5OTQyZjRjMzFiYmJkNzdiODcdeHMO: 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: ]] 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVhZjkzMjhlZDdhMDI1MGI4ZjEzZTdmMTc1YTNmZTn/8Umw: 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:58.591 
05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.591 request: 00:28:58.591 { 00:28:58.591 "name": "nvme0", 00:28:58.591 "dhchap_key": "key2", 00:28:58.591 "dhchap_ctrlr_key": "ckey1", 00:28:58.591 "method": "bdev_nvme_set_keys", 00:28:58.591 "req_id": 1 00:28:58.591 } 00:28:58.591 Got JSON-RPC error response 00:28:58.591 response: 00:28:58.591 { 00:28:58.591 "code": -13, 00:28:58.591 "message": "Permission denied" 00:28:58.591 } 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.591 05:23:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:58.591 05:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:59.530 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.530 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:59.530 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.530 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.530 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.530 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:59.530 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:59.530 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:59.530 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:59.530 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:59.530 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:59.530 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:59.530 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:59.530 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:59.530 05:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:59.530 rmmod nvme_tcp 00:28:59.530 rmmod nvme_fabrics 00:28:59.789 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:28:59.789 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:59.789 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:59.789 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 630641 ']' 00:28:59.789 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 630641 00:28:59.789 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 630641 ']' 00:28:59.789 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 630641 00:28:59.789 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:28:59.789 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:59.789 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 630641 00:28:59.789 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:59.789 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:59.789 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 630641' 00:28:59.789 killing process with pid 630641 00:28:59.789 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 630641 00:28:59.789 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 630641 00:29:00.047 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:00.047 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:00.047 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:00.047 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:29:00.047 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:29:00.047 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:00.047 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:29:00.047 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:00.047 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:00.047 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.047 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.047 05:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.952 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:01.953 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:01.953 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:01.953 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:01.953 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:01.953 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:29:01.953 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:01.953 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:01.953 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:01.953 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:01.953 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:01.953 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:02.212 05:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:05.502 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:05.502 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:05.502 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:05.502 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:05.502 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:05.502 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:05.502 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:05.502 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:05.502 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:05.502 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:05.502 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:05.502 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:05.761 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:05.761 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:05.761 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:05.761 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:07.139 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:29:07.397 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.9RG /tmp/spdk.key-null.gBU /tmp/spdk.key-sha256.2fO /tmp/spdk.key-sha384.lmC /tmp/spdk.key-sha512.U0w 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:07.397 05:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:10.685 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:29:10.685 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:29:10.685 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:29:10.685 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:29:10.685 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:29:10.685 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:29:10.685 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:29:10.685 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:29:10.685 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:29:10.685 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:29:10.685 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:29:10.685 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:29:10.685 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:29:10.685 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:29:10.685 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:29:10.685 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:29:10.685 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:10.949 00:29:10.949 real 1m0.538s 00:29:10.949 user 0m52.637s 00:29:10.949 sys 0m16.077s 00:29:10.949 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:10.949 05:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.949 ************************************ 00:29:10.949 END TEST nvmf_auth_host 00:29:10.949 ************************************ 00:29:10.949 05:23:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:29:10.949 05:23:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:10.949 05:23:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:10.949 05:23:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:10.949 05:23:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.949 ************************************ 00:29:10.949 START TEST nvmf_digest 00:29:10.949 ************************************ 00:29:10.949 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:10.949 * Looking for test storage... 00:29:10.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:10.949 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:10.949 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:29:10.949 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:11.209 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:11.209 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:11.209 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:11.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.210 --rc genhtml_branch_coverage=1 00:29:11.210 --rc genhtml_function_coverage=1 00:29:11.210 --rc genhtml_legend=1 00:29:11.210 --rc geninfo_all_blocks=1 00:29:11.210 --rc geninfo_unexecuted_blocks=1 00:29:11.210 00:29:11.210 ' 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:11.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.210 --rc genhtml_branch_coverage=1 00:29:11.210 --rc genhtml_function_coverage=1 00:29:11.210 --rc genhtml_legend=1 00:29:11.210 --rc geninfo_all_blocks=1 00:29:11.210 --rc geninfo_unexecuted_blocks=1 00:29:11.210 00:29:11.210 ' 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:11.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.210 --rc genhtml_branch_coverage=1 00:29:11.210 --rc genhtml_function_coverage=1 00:29:11.210 --rc genhtml_legend=1 00:29:11.210 --rc geninfo_all_blocks=1 00:29:11.210 --rc geninfo_unexecuted_blocks=1 00:29:11.210 00:29:11.210 ' 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:11.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.210 --rc genhtml_branch_coverage=1 00:29:11.210 --rc genhtml_function_coverage=1 00:29:11.210 --rc genhtml_legend=1 00:29:11.210 --rc geninfo_all_blocks=1 00:29:11.210 --rc geninfo_unexecuted_blocks=1 00:29:11.210 00:29:11.210 ' 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:11.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:11.210 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:11.211 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:11.211 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:11.211 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:11.211 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:11.211 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:11.211 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:11.211 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:11.211 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.211 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:11.211 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:11.211 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:11.211 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.211 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.211 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.211 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:11.211 05:23:53 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:11.211 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:29:11.211 05:23:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:19.335 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:19.336 05:24:00 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:19.336 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:19.336 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:19.336 Found net devices under 0000:af:00.0: cvl_0_0 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:19.336 Found net devices under 0000:af:00.1: cvl_0_1 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:19.336 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:19.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:29:19.337 00:29:19.337 --- 10.0.0.2 ping statistics --- 00:29:19.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.337 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:19.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:19.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:29:19.337 00:29:19.337 --- 10.0.0.1 ping statistics --- 00:29:19.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.337 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:19.337 ************************************ 00:29:19.337 START TEST nvmf_digest_clean 00:29:19.337 ************************************ 00:29:19.337 
05:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=645732 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 645732 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 645732 ']' 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.337 05:24:00 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.337 05:24:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:19.337 [2024-12-09 05:24:00.902375] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:29:19.337 [2024-12-09 05:24:00.902426] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.337 [2024-12-09 05:24:01.002400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.337 [2024-12-09 05:24:01.042960] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:19.337 [2024-12-09 05:24:01.042999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:19.337 [2024-12-09 05:24:01.043008] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:19.337 [2024-12-09 05:24:01.043017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:19.337 [2024-12-09 05:24:01.043025] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:19.338 [2024-12-09 05:24:01.043620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.338 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.338 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:19.338 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:19.338 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:19.338 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:19.338 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:19.338 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:19.338 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:19.338 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:19.338 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.338 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:19.597 null0 00:29:19.598 [2024-12-09 05:24:01.861106] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.598 [2024-12-09 05:24:01.885316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:29:19.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:19.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:19.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:19.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:19.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:19.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:19.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=646077 00:29:19.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 646077 /var/tmp/bperf.sock 00:29:19.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:19.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 646077 ']' 00:29:19.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:19.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:19.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:19.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.598 05:24:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:19.598 [2024-12-09 05:24:01.941653] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:29:19.598 [2024-12-09 05:24:01.941700] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid646077 ] 00:29:19.598 [2024-12-09 05:24:02.015729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.598 [2024-12-09 05:24:02.055474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.856 05:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.856 05:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:19.856 05:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:19.856 05:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:19.857 05:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:20.115 05:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:20.115 05:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:20.374 nvme0n1 00:29:20.374 05:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:20.374 05:24:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:20.374 Running I/O for 2 seconds... 00:29:22.689 26681.00 IOPS, 104.22 MiB/s [2024-12-09T04:24:05.159Z] 26784.00 IOPS, 104.62 MiB/s 00:29:22.689 Latency(us) 00:29:22.689 [2024-12-09T04:24:05.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.689 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:22.689 nvme0n1 : 2.00 26802.02 104.70 0.00 0.00 4770.90 2202.01 12897.48 00:29:22.689 [2024-12-09T04:24:05.159Z] =================================================================================================================== 00:29:22.689 [2024-12-09T04:24:05.159Z] Total : 26802.02 104.70 0.00 0.00 4770.90 2202.01 12897.48 00:29:22.689 { 00:29:22.689 "results": [ 00:29:22.689 { 00:29:22.689 "job": "nvme0n1", 00:29:22.689 "core_mask": "0x2", 00:29:22.689 "workload": "randread", 00:29:22.689 "status": "finished", 00:29:22.689 "queue_depth": 128, 00:29:22.689 "io_size": 4096, 00:29:22.689 "runtime": 2.003431, 00:29:22.689 "iops": 26802.021132746773, 00:29:22.689 "mibps": 104.69539504979208, 00:29:22.689 "io_failed": 0, 00:29:22.689 "io_timeout": 0, 00:29:22.689 "avg_latency_us": 4770.902045292015, 00:29:22.689 "min_latency_us": 2202.0096, 00:29:22.689 "max_latency_us": 12897.4848 00:29:22.689 } 00:29:22.689 ], 00:29:22.689 "core_count": 1 00:29:22.689 } 00:29:22.689 05:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:22.689 05:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 
00:29:22.689 05:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:22.689 05:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:22.689 | select(.opcode=="crc32c") 00:29:22.689 | "\(.module_name) \(.executed)"' 00:29:22.689 05:24:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:22.689 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:22.689 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:22.689 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:22.690 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:22.690 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 646077 00:29:22.690 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 646077 ']' 00:29:22.690 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 646077 00:29:22.690 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:22.690 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:22.690 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 646077 00:29:22.690 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:22.690 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:29:22.690 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 646077' 00:29:22.690 killing process with pid 646077 00:29:22.690 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 646077 00:29:22.690 Received shutdown signal, test time was about 2.000000 seconds 00:29:22.690 00:29:22.690 Latency(us) 00:29:22.690 [2024-12-09T04:24:05.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.690 [2024-12-09T04:24:05.160Z] =================================================================================================================== 00:29:22.690 [2024-12-09T04:24:05.160Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:22.690 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 646077 00:29:22.949 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:22.949 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:22.949 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:22.949 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:22.949 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:22.949 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:22.949 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:22.949 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=646957 00:29:22.949 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 646957 /var/tmp/bperf.sock 
00:29:22.949 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:22.949 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 646957 ']' 00:29:22.949 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:22.949 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:22.949 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:22.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:22.949 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:22.949 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:22.949 [2024-12-09 05:24:05.352390] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:29:22.949 [2024-12-09 05:24:05.352446] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid646957 ] 00:29:22.949 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:22.949 Zero copy mechanism will not be used. 
00:29:23.208 [2024-12-09 05:24:05.443971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.208 [2024-12-09 05:24:05.481792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.208 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:23.208 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:23.208 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:23.208 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:23.208 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:23.467 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:23.467 05:24:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:23.726 nvme0n1 00:29:23.726 05:24:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:23.726 05:24:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:23.984 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:23.984 Zero copy mechanism will not be used. 00:29:23.984 Running I/O for 2 seconds... 
00:29:25.859 6122.00 IOPS, 765.25 MiB/s [2024-12-09T04:24:08.329Z] 5797.50 IOPS, 724.69 MiB/s 00:29:25.859 Latency(us) 00:29:25.859 [2024-12-09T04:24:08.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.859 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:25.859 nvme0n1 : 2.00 5797.04 724.63 0.00 0.00 2757.47 622.59 5190.45 00:29:25.859 [2024-12-09T04:24:08.329Z] =================================================================================================================== 00:29:25.859 [2024-12-09T04:24:08.329Z] Total : 5797.04 724.63 0.00 0.00 2757.47 622.59 5190.45 00:29:25.859 { 00:29:25.859 "results": [ 00:29:25.859 { 00:29:25.859 "job": "nvme0n1", 00:29:25.859 "core_mask": "0x2", 00:29:25.859 "workload": "randread", 00:29:25.859 "status": "finished", 00:29:25.859 "queue_depth": 16, 00:29:25.859 "io_size": 131072, 00:29:25.859 "runtime": 2.002918, 00:29:25.859 "iops": 5797.042115553408, 00:29:25.859 "mibps": 724.630264444176, 00:29:25.859 "io_failed": 0, 00:29:25.859 "io_timeout": 0, 00:29:25.859 "avg_latency_us": 2757.4672745499956, 00:29:25.859 "min_latency_us": 622.592, 00:29:25.859 "max_latency_us": 5190.4512 00:29:25.859 } 00:29:25.859 ], 00:29:25.859 "core_count": 1 00:29:25.859 } 00:29:25.860 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:25.860 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:25.860 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:25.860 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:25.860 | select(.opcode=="crc32c") 00:29:25.860 | "\(.module_name) \(.executed)"' 00:29:25.860 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:26.119 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:26.119 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:26.119 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:26.119 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:26.119 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 646957 00:29:26.119 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 646957 ']' 00:29:26.119 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 646957 00:29:26.119 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:26.119 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:26.119 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 646957 00:29:26.119 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:26.119 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:26.119 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 646957' 00:29:26.119 killing process with pid 646957 00:29:26.119 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 646957 00:29:26.119 Received shutdown signal, test time was about 2.000000 seconds 00:29:26.119 
00:29:26.119 Latency(us) 00:29:26.119 [2024-12-09T04:24:08.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.119 [2024-12-09T04:24:08.589Z] =================================================================================================================== 00:29:26.119 [2024-12-09T04:24:08.589Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:26.119 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 646957 00:29:26.379 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:26.379 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:26.379 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:26.379 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:26.379 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:26.379 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:26.379 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:26.379 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:26.379 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=647601 00:29:26.379 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 647601 /var/tmp/bperf.sock 00:29:26.379 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 647601 ']' 00:29:26.379 05:24:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:26.379 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:26.379 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:26.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:26.379 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:26.379 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:26.379 [2024-12-09 05:24:08.750327] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:29:26.379 [2024-12-09 05:24:08.750381] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid647601 ] 00:29:26.379 [2024-12-09 05:24:08.824324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.639 [2024-12-09 05:24:08.865047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.639 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:26.639 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:26.639 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:26.639 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:26.639 05:24:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:26.898 05:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:26.898 05:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:27.157 nvme0n1 00:29:27.157 05:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:27.157 05:24:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:27.157 Running I/O for 2 seconds... 
00:29:29.485 27045.00 IOPS, 105.64 MiB/s [2024-12-09T04:24:11.955Z] 27190.50 IOPS, 106.21 MiB/s 00:29:29.485 Latency(us) 00:29:29.485 [2024-12-09T04:24:11.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.485 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:29.485 nvme0n1 : 2.01 27193.15 106.22 0.00 0.00 4698.91 1913.65 6396.31 00:29:29.485 [2024-12-09T04:24:11.955Z] =================================================================================================================== 00:29:29.485 [2024-12-09T04:24:11.955Z] Total : 27193.15 106.22 0.00 0.00 4698.91 1913.65 6396.31 00:29:29.485 { 00:29:29.485 "results": [ 00:29:29.485 { 00:29:29.485 "job": "nvme0n1", 00:29:29.485 "core_mask": "0x2", 00:29:29.485 "workload": "randwrite", 00:29:29.485 "status": "finished", 00:29:29.485 "queue_depth": 128, 00:29:29.485 "io_size": 4096, 00:29:29.485 "runtime": 2.005689, 00:29:29.485 "iops": 27193.149087420832, 00:29:29.485 "mibps": 106.22323862273763, 00:29:29.485 "io_failed": 0, 00:29:29.485 "io_timeout": 0, 00:29:29.485 "avg_latency_us": 4698.9131761280505, 00:29:29.485 "min_latency_us": 1913.6512, 00:29:29.485 "max_latency_us": 6396.3136 00:29:29.485 } 00:29:29.485 ], 00:29:29.485 "core_count": 1 00:29:29.485 } 00:29:29.485 05:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:29.485 05:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:29.485 05:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:29.485 05:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:29.485 | select(.opcode=="crc32c") 00:29:29.485 | "\(.module_name) \(.executed)"' 00:29:29.485 05:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:29.485 05:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:29.485 05:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:29.485 05:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:29.485 05:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:29.485 05:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 647601 00:29:29.485 05:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 647601 ']' 00:29:29.485 05:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 647601 00:29:29.485 05:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:29.485 05:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:29.485 05:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 647601 00:29:29.485 05:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:29.485 05:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:29.485 05:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 647601' 00:29:29.485 killing process with pid 647601 00:29:29.485 05:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 647601 00:29:29.485 Received shutdown signal, test time was about 2.000000 seconds 00:29:29.485 
00:29:29.485 Latency(us) 00:29:29.485 [2024-12-09T04:24:11.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.485 [2024-12-09T04:24:11.955Z] =================================================================================================================== 00:29:29.485 [2024-12-09T04:24:11.955Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:29.485 05:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 647601 00:29:29.743 05:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:29.743 05:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:29.743 05:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:29.743 05:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:29.743 05:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:29.743 05:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:29.743 05:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:29.743 05:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:29.743 05:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=648142 00:29:29.743 05:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 648142 /var/tmp/bperf.sock 00:29:29.743 05:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 648142 ']' 00:29:29.743 05:24:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:29.743 05:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:29.743 05:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:29.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:29.743 05:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:29.743 05:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:29.743 [2024-12-09 05:24:12.057284] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:29:29.743 [2024-12-09 05:24:12.057333] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid648142 ] 00:29:29.743 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:29.743 Zero copy mechanism will not be used. 
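The `get_accel_stats` helper invoked throughout this run pipes the `accel_get_stats` RPC output through jq: `.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"`. A minimal Python equivalent of that filter, applied to a hypothetical stats payload (the field names `opcode`, `module_name`, and `executed` are taken from the jq expression in the log; the values and the `copy` entry are made up for illustration):

```python
import json

# Hypothetical accel_get_stats-style payload; only the fields the jq filter
# touches are modeled here, and the numbers are invented.
stats_json = """
{
  "operations": [
    {"opcode": "copy",   "module_name": "software", "executed": 12},
    {"opcode": "crc32c", "module_name": "software", "executed": 53604}
  ]
}
"""

def crc32c_stats(payload: str) -> list:
    """Mirror of: jq -r '.operations[] | select(.opcode=="crc32c")
    | "\\(.module_name) \\(.executed)"'"""
    return [
        f'{op["module_name"]} {op["executed"]}'
        for op in json.loads(payload)["operations"]
        if op["opcode"] == "crc32c"
    ]

print(crc32c_stats(stats_json))  # → ['software 53604']
```

The test script then does `read -r acc_module acc_executed` on that one-line output, which is why the jq filter emits a single space-separated string per matching operation.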
00:29:29.743 [2024-12-09 05:24:12.148369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.743 [2024-12-09 05:24:12.182926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.679 05:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:30.679 05:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:30.679 05:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:30.679 05:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:30.679 05:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:30.938 05:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:30.938 05:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:31.198 nvme0n1 00:29:31.198 05:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:31.198 05:24:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:31.198 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:31.198 Zero copy mechanism will not be used. 00:29:31.198 Running I/O for 2 seconds... 
00:29:33.519 5995.00 IOPS, 749.38 MiB/s [2024-12-09T04:24:15.989Z] 6388.00 IOPS, 798.50 MiB/s 00:29:33.519 Latency(us) 00:29:33.519 [2024-12-09T04:24:15.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.519 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:33.519 nvme0n1 : 2.00 6387.81 798.48 0.00 0.00 2501.08 1599.08 11377.05 00:29:33.519 [2024-12-09T04:24:15.989Z] =================================================================================================================== 00:29:33.519 [2024-12-09T04:24:15.989Z] Total : 6387.81 798.48 0.00 0.00 2501.08 1599.08 11377.05 00:29:33.519 { 00:29:33.519 "results": [ 00:29:33.519 { 00:29:33.519 "job": "nvme0n1", 00:29:33.519 "core_mask": "0x2", 00:29:33.519 "workload": "randwrite", 00:29:33.519 "status": "finished", 00:29:33.519 "queue_depth": 16, 00:29:33.519 "io_size": 131072, 00:29:33.519 "runtime": 2.003192, 00:29:33.519 "iops": 6387.805063119262, 00:29:33.519 "mibps": 798.4756328899077, 00:29:33.519 "io_failed": 0, 00:29:33.519 "io_timeout": 0, 00:29:33.519 "avg_latency_us": 2501.0846929665518, 00:29:33.519 "min_latency_us": 1599.0784, 00:29:33.519 "max_latency_us": 11377.0496 00:29:33.519 } 00:29:33.519 ], 00:29:33.519 "core_count": 1 00:29:33.519 } 00:29:33.519 05:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:33.519 05:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:33.519 05:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:33.519 | select(.opcode=="crc32c") 00:29:33.519 | "\(.module_name) \(.executed)"' 00:29:33.519 05:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:33.519 05:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:33.519 05:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:33.519 05:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:33.519 05:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:33.519 05:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:33.519 05:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 648142 00:29:33.519 05:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 648142 ']' 00:29:33.519 05:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 648142 00:29:33.519 05:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:33.519 05:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:33.519 05:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 648142 00:29:33.519 05:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:33.519 05:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:33.519 05:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 648142' 00:29:33.519 killing process with pid 648142 00:29:33.519 05:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 648142 00:29:33.519 Received shutdown signal, test time was about 2.000000 seconds 00:29:33.519 
00:29:33.519 Latency(us) 00:29:33.519 [2024-12-09T04:24:15.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.519 [2024-12-09T04:24:15.989Z] =================================================================================================================== 00:29:33.519 [2024-12-09T04:24:15.989Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:33.519 05:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 648142 00:29:33.779 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 645732 00:29:33.779 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 645732 ']' 00:29:33.779 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 645732 00:29:33.779 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:33.779 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:33.779 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 645732 00:29:33.779 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:33.779 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:33.779 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 645732' 00:29:33.779 killing process with pid 645732 00:29:33.779 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 645732 00:29:33.779 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 645732 00:29:34.039 00:29:34.039 real 0m15.563s 
00:29:34.039 user 0m28.875s 00:29:34.039 sys 0m5.328s 00:29:34.039 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:34.039 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:34.039 ************************************ 00:29:34.039 END TEST nvmf_digest_clean 00:29:34.039 ************************************ 00:29:34.039 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:34.039 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:34.039 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:34.039 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:34.039 ************************************ 00:29:34.039 START TEST nvmf_digest_error 00:29:34.039 ************************************ 00:29:34.039 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:29:34.039 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:34.039 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:34.039 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:34.039 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:34.039 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=648968 00:29:34.039 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 648968 00:29:34.039 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:34.039 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 648968 ']' 00:29:34.039 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.039 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.039 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.039 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.039 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:34.299 [2024-12-09 05:24:16.553295] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:29:34.299 [2024-12-09 05:24:16.553338] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.299 [2024-12-09 05:24:16.632810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.299 [2024-12-09 05:24:16.670924] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.299 [2024-12-09 05:24:16.670963] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:34.299 [2024-12-09 05:24:16.670973] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.299 [2024-12-09 05:24:16.670981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.299 [2024-12-09 05:24:16.670989] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:34.299 [2024-12-09 05:24:16.671607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.299 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.299 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:34.299 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:34.299 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:34.299 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:34.299 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.299 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:34.299 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.299 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:34.299 [2024-12-09 05:24:16.764141] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:34.558 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.558 05:24:16 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:34.558 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:34.558 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.558 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:34.558 null0 00:29:34.558 [2024-12-09 05:24:16.860321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.558 [2024-12-09 05:24:16.884519] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.558 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.558 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:34.558 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:34.558 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:34.558 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:34.558 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:34.558 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=648998 00:29:34.558 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 648998 /var/tmp/bperf.sock 00:29:34.558 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:34.558 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 648998 ']' 
00:29:34.558 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:34.558 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.558 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:34.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:34.558 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.558 05:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:34.558 [2024-12-09 05:24:16.936819] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:29:34.558 [2024-12-09 05:24:16.936864] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid648998 ] 00:29:34.817 [2024-12-09 05:24:17.028219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.817 [2024-12-09 05:24:17.068428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.382 05:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.382 05:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:35.382 05:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:35.382 05:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:35.640 05:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:35.640 05:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.640 05:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:35.640 05:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.640 05:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:35.640 05:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:35.899 nvme0n1 00:29:35.899 05:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:35.899 05:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.899 05:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:35.899 05:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.899 05:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:35.899 05:24:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:36.157 Running I/O for 2 seconds... 00:29:36.157 [2024-12-09 05:24:18.467476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.157 [2024-12-09 05:24:18.467512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-12-09 05:24:18.467524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.157 [2024-12-09 05:24:18.478719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.157 [2024-12-09 05:24:18.478746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.157 [2024-12-09 05:24:18.478758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.158 [2024-12-09 05:24:18.487992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.158 [2024-12-09 05:24:18.488016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-12-09 05:24:18.488027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.158 [2024-12-09 05:24:18.496051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.158 [2024-12-09 05:24:18.496073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8029 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-12-09 05:24:18.496084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.158 [2024-12-09 05:24:18.505401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.158 [2024-12-09 05:24:18.505423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-12-09 05:24:18.505434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.158 [2024-12-09 05:24:18.515148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.158 [2024-12-09 05:24:18.515170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-12-09 05:24:18.515181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.158 [2024-12-09 05:24:18.523051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.158 [2024-12-09 05:24:18.523076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-12-09 05:24:18.523087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.158 [2024-12-09 05:24:18.532627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.158 [2024-12-09 05:24:18.532650] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-12-09 05:24:18.532661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.158 [2024-12-09 05:24:18.542530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.158 [2024-12-09 05:24:18.542552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-12-09 05:24:18.542562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.158 [2024-12-09 05:24:18.551609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.158 [2024-12-09 05:24:18.551630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-12-09 05:24:18.551640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.158 [2024-12-09 05:24:18.560368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.158 [2024-12-09 05:24:18.560389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-12-09 05:24:18.560399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.158 [2024-12-09 05:24:18.568990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.158 [2024-12-09 
05:24:18.569011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-12-09 05:24:18.569021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.158 [2024-12-09 05:24:18.581048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.158 [2024-12-09 05:24:18.581069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-12-09 05:24:18.581079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.158 [2024-12-09 05:24:18.593369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.158 [2024-12-09 05:24:18.593390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-12-09 05:24:18.593401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.158 [2024-12-09 05:24:18.604369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.158 [2024-12-09 05:24:18.604391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-12-09 05:24:18.604402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.158 [2024-12-09 05:24:18.613915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x834d40) 00:29:36.158 [2024-12-09 05:24:18.613936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-12-09 05:24:18.613946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.158 [2024-12-09 05:24:18.625061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.158 [2024-12-09 05:24:18.625082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.158 [2024-12-09 05:24:18.625092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.417 [2024-12-09 05:24:18.634513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.417 [2024-12-09 05:24:18.634536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-12-09 05:24:18.634547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.417 [2024-12-09 05:24:18.643230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.417 [2024-12-09 05:24:18.643251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-12-09 05:24:18.643262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.417 [2024-12-09 05:24:18.655172] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.417 [2024-12-09 05:24:18.655194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-12-09 05:24:18.655204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.417 [2024-12-09 05:24:18.666749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.417 [2024-12-09 05:24:18.666772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-12-09 05:24:18.666782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.417 [2024-12-09 05:24:18.677443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.417 [2024-12-09 05:24:18.677464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-12-09 05:24:18.677474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.417 [2024-12-09 05:24:18.686808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.417 [2024-12-09 05:24:18.686829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-12-09 05:24:18.686840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:36.417 [2024-12-09 05:24:18.698584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.417 [2024-12-09 05:24:18.698605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-12-09 05:24:18.698618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.417 [2024-12-09 05:24:18.708014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.417 [2024-12-09 05:24:18.708036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-12-09 05:24:18.708046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.417 [2024-12-09 05:24:18.716051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.417 [2024-12-09 05:24:18.716072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.417 [2024-12-09 05:24:18.716082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.417 [2024-12-09 05:24:18.726581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.418 [2024-12-09 05:24:18.726602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-12-09 05:24:18.726612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-12-09 05:24:18.736066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.418 [2024-12-09 05:24:18.736089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-12-09 05:24:18.736099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-12-09 05:24:18.747533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.418 [2024-12-09 05:24:18.747555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-12-09 05:24:18.747566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-12-09 05:24:18.759654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.418 [2024-12-09 05:24:18.759675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-12-09 05:24:18.759685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-12-09 05:24:18.768219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.418 [2024-12-09 05:24:18.768240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-12-09 05:24:18.768250] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-12-09 05:24:18.780659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.418 [2024-12-09 05:24:18.780680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-12-09 05:24:18.780690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-12-09 05:24:18.788994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.418 [2024-12-09 05:24:18.789019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-12-09 05:24:18.789030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-12-09 05:24:18.799179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.418 [2024-12-09 05:24:18.799200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-12-09 05:24:18.799215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-12-09 05:24:18.808716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.418 [2024-12-09 05:24:18.808737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24289 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:36.418 [2024-12-09 05:24:18.808747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-12-09 05:24:18.818538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.418 [2024-12-09 05:24:18.818560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-12-09 05:24:18.818571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-12-09 05:24:18.828384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.418 [2024-12-09 05:24:18.828406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-12-09 05:24:18.828416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-12-09 05:24:18.836897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.418 [2024-12-09 05:24:18.836918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-12-09 05:24:18.836928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-12-09 05:24:18.845965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.418 [2024-12-09 05:24:18.845988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:110 nsid:1 lba:1038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-12-09 05:24:18.845998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-12-09 05:24:18.854943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.418 [2024-12-09 05:24:18.854964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-12-09 05:24:18.854974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-12-09 05:24:18.864178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.418 [2024-12-09 05:24:18.864201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-12-09 05:24:18.864217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-12-09 05:24:18.873397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.418 [2024-12-09 05:24:18.873419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-12-09 05:24:18.873429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.418 [2024-12-09 05:24:18.882086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.418 [2024-12-09 05:24:18.882109] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.418 [2024-12-09 05:24:18.882119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.678 [2024-12-09 05:24:18.892260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.678 [2024-12-09 05:24:18.892283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.678 [2024-12-09 05:24:18.892295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.678 [2024-12-09 05:24:18.901195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.678 [2024-12-09 05:24:18.901224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.678 [2024-12-09 05:24:18.901235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.678 [2024-12-09 05:24:18.910538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.678 [2024-12-09 05:24:18.910562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.678 [2024-12-09 05:24:18.910572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.678 [2024-12-09 05:24:18.919638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x834d40) 00:29:36.678 [2024-12-09 05:24:18.919661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.678 [2024-12-09 05:24:18.919672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.678 [2024-12-09 05:24:18.929160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.678 [2024-12-09 05:24:18.929181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.678 [2024-12-09 05:24:18.929192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.678 [2024-12-09 05:24:18.938145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.678 [2024-12-09 05:24:18.938167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.678 [2024-12-09 05:24:18.938178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.678 [2024-12-09 05:24:18.947127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.678 [2024-12-09 05:24:18.947149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.678 [2024-12-09 05:24:18.947163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.678 [2024-12-09 05:24:18.956567] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.678 [2024-12-09 05:24:18.956589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.678 [2024-12-09 05:24:18.956600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.678 [2024-12-09 05:24:18.965583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.678 [2024-12-09 05:24:18.965604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.678 [2024-12-09 05:24:18.965615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.678 [2024-12-09 05:24:18.974824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.678 [2024-12-09 05:24:18.974846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.678 [2024-12-09 05:24:18.974856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.678 [2024-12-09 05:24:18.982763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.678 [2024-12-09 05:24:18.982784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.678 [2024-12-09 05:24:18.982794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:36.678 [2024-12-09 05:24:18.992407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.678 [2024-12-09 05:24:18.992428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.678 [2024-12-09 05:24:18.992438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.678 [2024-12-09 05:24:19.001460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.678 [2024-12-09 05:24:19.001480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.678 [2024-12-09 05:24:19.001491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.678 [2024-12-09 05:24:19.011532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.678 [2024-12-09 05:24:19.011554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.678 [2024-12-09 05:24:19.011565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.678 [2024-12-09 05:24:19.020815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.678 [2024-12-09 05:24:19.020837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.678 [2024-12-09 05:24:19.020847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.678 [2024-12-09 05:24:19.029569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.678 [2024-12-09 05:24:19.029590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.678 [2024-12-09 05:24:19.029601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.678 [2024-12-09 05:24:19.038684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.678 [2024-12-09 05:24:19.038706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.678 [2024-12-09 05:24:19.038716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.678 [2024-12-09 05:24:19.046745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.678 [2024-12-09 05:24:19.046768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.678 [2024-12-09 05:24:19.046778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.678 [2024-12-09 05:24:19.056650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.679 [2024-12-09 05:24:19.056672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.679 [2024-12-09 05:24:19.056683] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.679 [2024-12-09 05:24:19.066555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.679 [2024-12-09 05:24:19.066577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.679 [2024-12-09 05:24:19.066587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.679 [2024-12-09 05:24:19.074681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.679 [2024-12-09 05:24:19.074702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.679 [2024-12-09 05:24:19.074713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.679 [2024-12-09 05:24:19.085420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.679 [2024-12-09 05:24:19.085442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.679 [2024-12-09 05:24:19.085452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.679 [2024-12-09 05:24:19.095227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.679 [2024-12-09 05:24:19.095248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3257 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:36.679 [2024-12-09 05:24:19.095259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.679 [2024-12-09 05:24:19.103570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.679 [2024-12-09 05:24:19.103594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.679 [2024-12-09 05:24:19.103608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.679 [2024-12-09 05:24:19.113348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.679 [2024-12-09 05:24:19.113370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.679 [2024-12-09 05:24:19.113380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.679 [2024-12-09 05:24:19.122349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.679 [2024-12-09 05:24:19.122369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.679 [2024-12-09 05:24:19.122380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.679 [2024-12-09 05:24:19.131824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.679 [2024-12-09 05:24:19.131847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:20 nsid:1 lba:22439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.679 [2024-12-09 05:24:19.131858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.679 [2024-12-09 05:24:19.140277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.679 [2024-12-09 05:24:19.140304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.679 [2024-12-09 05:24:19.140315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.938 [2024-12-09 05:24:19.150360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.938 [2024-12-09 05:24:19.150382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.938 [2024-12-09 05:24:19.150393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.938 [2024-12-09 05:24:19.160371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.938 [2024-12-09 05:24:19.160392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.938 [2024-12-09 05:24:19.160403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.938 [2024-12-09 05:24:19.168976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.938 [2024-12-09 05:24:19.168997] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.938 [2024-12-09 05:24:19.169007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.938 [2024-12-09 05:24:19.178651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.938 [2024-12-09 05:24:19.178672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.938 [2024-12-09 05:24:19.178683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.938 [2024-12-09 05:24:19.187718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.938 [2024-12-09 05:24:19.187744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.938 [2024-12-09 05:24:19.187755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.938 [2024-12-09 05:24:19.197571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.938 [2024-12-09 05:24:19.197592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.938 [2024-12-09 05:24:19.197602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.938 [2024-12-09 05:24:19.206641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x834d40) 00:29:36.938 [2024-12-09 05:24:19.206662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.938 [2024-12-09 05:24:19.206673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.938 [2024-12-09 05:24:19.219580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.938 [2024-12-09 05:24:19.219601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.938 [2024-12-09 05:24:19.219612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.938 [2024-12-09 05:24:19.229101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.938 [2024-12-09 05:24:19.229123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.938 [2024-12-09 05:24:19.229134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.938 [2024-12-09 05:24:19.237647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.938 [2024-12-09 05:24:19.237670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.938 [2024-12-09 05:24:19.237681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.938 [2024-12-09 05:24:19.249130] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.938 [2024-12-09 05:24:19.249153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.938 [2024-12-09 05:24:19.249164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.938 [2024-12-09 05:24:19.257700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.938 [2024-12-09 05:24:19.257723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.938 [2024-12-09 05:24:19.257734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.938 [2024-12-09 05:24:19.268414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.938 [2024-12-09 05:24:19.268436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.939 [2024-12-09 05:24:19.268446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.939 [2024-12-09 05:24:19.278100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.939 [2024-12-09 05:24:19.278122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.939 [2024-12-09 05:24:19.278133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:36.939 [2024-12-09 05:24:19.288573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.939 [2024-12-09 05:24:19.288595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.939 [2024-12-09 05:24:19.288605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.939 [2024-12-09 05:24:19.297606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.939 [2024-12-09 05:24:19.297628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.939 [2024-12-09 05:24:19.297638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.939 [2024-12-09 05:24:19.306102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.939 [2024-12-09 05:24:19.306124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.939 [2024-12-09 05:24:19.306135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.939 [2024-12-09 05:24:19.314745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.939 [2024-12-09 05:24:19.314766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.939 [2024-12-09 05:24:19.314777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.939 [2024-12-09 05:24:19.324529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.939 [2024-12-09 05:24:19.324551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.939 [2024-12-09 05:24:19.324562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.939 [2024-12-09 05:24:19.334984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.939 [2024-12-09 05:24:19.335006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.939 [2024-12-09 05:24:19.335017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.939 [2024-12-09 05:24:19.345112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.939 [2024-12-09 05:24:19.345134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.939 [2024-12-09 05:24:19.345145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.939 [2024-12-09 05:24:19.353532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.939 [2024-12-09 05:24:19.353554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.939 [2024-12-09 05:24:19.353568] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.939 [2024-12-09 05:24:19.365675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.939 [2024-12-09 05:24:19.365697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.939 [2024-12-09 05:24:19.365707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.939 [2024-12-09 05:24:19.376862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.939 [2024-12-09 05:24:19.376884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.939 [2024-12-09 05:24:19.376894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.939 [2024-12-09 05:24:19.387450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.939 [2024-12-09 05:24:19.387472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.939 [2024-12-09 05:24:19.387482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.939 [2024-12-09 05:24:19.399403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:36.939 [2024-12-09 05:24:19.399424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20174 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:36.939 [2024-12-09 05:24:19.399435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.198 [2024-12-09 05:24:19.409348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.198 [2024-12-09 05:24:19.409370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.198 [2024-12-09 05:24:19.409381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.198 [2024-12-09 05:24:19.418004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.198 [2024-12-09 05:24:19.418025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.198 [2024-12-09 05:24:19.418036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.198 [2024-12-09 05:24:19.428711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.198 [2024-12-09 05:24:19.428733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.198 [2024-12-09 05:24:19.428743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.198 [2024-12-09 05:24:19.436842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.198 [2024-12-09 05:24:19.436864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:123 nsid:1 lba:23169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.198 [2024-12-09 05:24:19.436875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.198 [2024-12-09 05:24:19.448956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.198 [2024-12-09 05:24:19.448982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.198 [2024-12-09 05:24:19.448992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.198 26117.00 IOPS, 102.02 MiB/s [2024-12-09T04:24:19.668Z] [2024-12-09 05:24:19.457332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.198 [2024-12-09 05:24:19.457354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.198 [2024-12-09 05:24:19.457364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.198 [2024-12-09 05:24:19.469113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.199 [2024-12-09 05:24:19.469137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.199 [2024-12-09 05:24:19.469147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.199 [2024-12-09 05:24:19.480400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x834d40) 00:29:37.199 [2024-12-09 05:24:19.480421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.199 [2024-12-09 05:24:19.480432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.199 [2024-12-09 05:24:19.489822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.199 [2024-12-09 05:24:19.489843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.199 [2024-12-09 05:24:19.489854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.199 [2024-12-09 05:24:19.498267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.199 [2024-12-09 05:24:19.498289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.199 [2024-12-09 05:24:19.498299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.199 [2024-12-09 05:24:19.508323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.199 [2024-12-09 05:24:19.508344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.199 [2024-12-09 05:24:19.508355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.199 [2024-12-09 05:24:19.519021] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.199 [2024-12-09 05:24:19.519042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.199 [2024-12-09 05:24:19.519052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... repeated entries omitted: the same three-record pattern (nvme_tcp.c:1365 data digest error on tqpair=(0x834d40), nvme_qpair.c:243 READ command notice, nvme_qpair.c:474 TRANSIENT TRANSPORT ERROR (00/22) completion) recurs for dozens of further I/Os from 05:24:19.527 through 05:24:20.266, differing only in timestamp, cid, and lba ...]
BLOCK TRANSPORT 0x0 00:29:37.983 [2024-12-09 05:24:20.266313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.983 [2024-12-09 05:24:20.276166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.983 [2024-12-09 05:24:20.276188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.983 [2024-12-09 05:24:20.276198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.983 [2024-12-09 05:24:20.285977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.983 [2024-12-09 05:24:20.285998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.983 [2024-12-09 05:24:20.286013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.983 [2024-12-09 05:24:20.297162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.983 [2024-12-09 05:24:20.297183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.983 [2024-12-09 05:24:20.297194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.983 [2024-12-09 05:24:20.306988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.983 [2024-12-09 05:24:20.307010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:3693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.983 [2024-12-09 05:24:20.307020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.983 [2024-12-09 05:24:20.319085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.983 [2024-12-09 05:24:20.319107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.983 [2024-12-09 05:24:20.319118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.983 [2024-12-09 05:24:20.328794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.983 [2024-12-09 05:24:20.328815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.983 [2024-12-09 05:24:20.328826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.983 [2024-12-09 05:24:20.339519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.983 [2024-12-09 05:24:20.339541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.983 [2024-12-09 05:24:20.339551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.983 [2024-12-09 05:24:20.351757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.983 [2024-12-09 05:24:20.351779] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.983 [2024-12-09 05:24:20.351789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.983 [2024-12-09 05:24:20.361768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.983 [2024-12-09 05:24:20.361789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.983 [2024-12-09 05:24:20.361799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.983 [2024-12-09 05:24:20.370425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.983 [2024-12-09 05:24:20.370447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.983 [2024-12-09 05:24:20.370457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.983 [2024-12-09 05:24:20.381416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.983 [2024-12-09 05:24:20.381437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.983 [2024-12-09 05:24:20.381448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.983 [2024-12-09 05:24:20.389884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x834d40) 00:29:37.983 [2024-12-09 05:24:20.389905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.983 [2024-12-09 05:24:20.389915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.983 [2024-12-09 05:24:20.401152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.983 [2024-12-09 05:24:20.401188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.983 [2024-12-09 05:24:20.401199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.983 [2024-12-09 05:24:20.412917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.983 [2024-12-09 05:24:20.412938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.983 [2024-12-09 05:24:20.412948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.983 [2024-12-09 05:24:20.421115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.983 [2024-12-09 05:24:20.421137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.983 [2024-12-09 05:24:20.421148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.983 [2024-12-09 05:24:20.431156] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.983 [2024-12-09 05:24:20.431178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.983 [2024-12-09 05:24:20.431188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.983 [2024-12-09 05:24:20.440872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.983 [2024-12-09 05:24:20.440892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.983 [2024-12-09 05:24:20.440902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.983 [2024-12-09 05:24:20.449171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:37.983 [2024-12-09 05:24:20.449192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.983 [2024-12-09 05:24:20.449203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.242 26072.00 IOPS, 101.84 MiB/s [2024-12-09T04:24:20.712Z] [2024-12-09 05:24:20.460775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x834d40) 00:29:38.242 [2024-12-09 05:24:20.460798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.242 [2024-12-09 05:24:20.460813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:38.242
00:29:38.242 Latency(us)
00:29:38.242 [2024-12-09T04:24:20.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:38.242 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:38.242 nvme0n1 : 2.05 25544.60 99.78 0.00 0.00 4907.84 2621.44 44879.05
00:29:38.242 [2024-12-09T04:24:20.712Z] ===================================================================================================================
00:29:38.242 [2024-12-09T04:24:20.712Z] Total : 25544.60 99.78 0.00 0.00 4907.84 2621.44 44879.05
00:29:38.242 {
00:29:38.242 "results": [
00:29:38.242 {
00:29:38.242 "job": "nvme0n1",
00:29:38.242 "core_mask": "0x2",
00:29:38.242 "workload": "randread",
00:29:38.242 "status": "finished",
00:29:38.242 "queue_depth": 128,
00:29:38.242 "io_size": 4096,
00:29:38.242 "runtime": 2.046656,
00:29:38.242 "iops": 25544.595672159856,
00:29:38.242 "mibps": 99.78357684437444,
00:29:38.242 "io_failed": 0,
00:29:38.242 "io_timeout": 0,
00:29:38.242 "avg_latency_us": 4907.837870706375,
00:29:38.242 "min_latency_us": 2621.44,
00:29:38.242 "max_latency_us": 44879.0528
00:29:38.242 }
00:29:38.242 ],
00:29:38.242 "core_count": 1
00:29:38.242 }
00:29:38.242 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:38.242 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:38.242 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:38.242 | .driver_specific
00:29:38.242 | .nvme_error
00:29:38.242 | .status_code
00:29:38.242 | .command_transient_transport_error'
00:29:38.242 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat
-b nvme0n1
00:29:38.501 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 205 > 0 ))
00:29:38.501 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 648998
00:29:38.501 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 648998 ']'
00:29:38.501 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 648998
00:29:38.501 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:38.501 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:38.501 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 648998
00:29:38.501 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:38.501 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:38.501 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 648998'
killing process with pid 648998
05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 648998
Received shutdown signal, test time was about 2.000000 seconds
00:29:38.501
00:29:38.501 Latency(us)
00:29:38.501 [2024-12-09T04:24:20.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:38.501 [2024-12-09T04:24:20.971Z] ===================================================================================================================
00:29:38.501 [2024-12-09T04:24:20.971Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:38.501 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@978 -- # wait 648998
00:29:38.760 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:38.760 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:38.760 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:38.760 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:38.760 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:38.760 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=649752
00:29:38.760 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 649752 /var/tmp/bperf.sock
00:29:38.760 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:38.760 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 649752 ']'
00:29:38.760 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:38.760 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:38.760 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:38.760 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:38.760 05:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:38.760 [2024-12-09 05:24:21.035462] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
00:29:38.760 [2024-12-09 05:24:21.035516] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid649752 ]
00:29:38.760 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:38.760 Zero copy mechanism will not be used.
00:29:38.760 [2024-12-09 05:24:21.127064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:38.760 [2024-12-09 05:24:21.166681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:39.698 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:39.698 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:39.698 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:39.698 05:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:39.698 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:39.698 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.698 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@10 -- # set +x
00:29:39.698 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.698 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:39.698 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:39.957 nvme0n1
00:29:39.958 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:39.958 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.958 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:39.958 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.958 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:39.958 05:24:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:40.218 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:40.218 Zero copy mechanism will not be used.
00:29:40.218 Running I/O for 2 seconds...
00:29:40.218 [2024-12-09 05:24:22.442817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.442858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.442871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.448325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.448354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.448365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.453754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.453778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.453789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.459079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.459103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.459113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.464478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.464502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.464513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.469665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.469688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.469699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.474833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.474858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.474869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.479836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.479864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.479876] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.484879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.484903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.484914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.489942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.489965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.489975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.495045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.495069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.495080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.500065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.500089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10720 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.500100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.505121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.505145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.505155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.510170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.510193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.510204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.515152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.515176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.515186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.520170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.520193] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.520203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.525241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.525264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.525274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.530260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.530283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.530294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.535260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.535283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.535293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.540245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 
05:24:22.540267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.540277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.545348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.545382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.545392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.550462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.550485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.550496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.555448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.555471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.555481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.560411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.560434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.560445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.565165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.565189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.565202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.570183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.570213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.570224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.575235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.575259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.575269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.580231] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.580254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.580265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.585175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.585199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.218 [2024-12-09 05:24:22.585215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:40.218 [2024-12-09 05:24:22.590225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.218 [2024-12-09 05:24:22.590248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.219 [2024-12-09 05:24:22.590258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:40.219 [2024-12-09 05:24:22.595182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.219 [2024-12-09 05:24:22.595213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.219 [2024-12-09 05:24:22.595224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:005b p:0 m:0 dnr:0 00:29:40.219 [2024-12-09 05:24:22.600388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.219 [2024-12-09 05:24:22.600411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.219 [2024-12-09 05:24:22.600421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.219 [2024-12-09 05:24:22.605364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.219 [2024-12-09 05:24:22.605388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.219 [2024-12-09 05:24:22.605399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:40.219 [2024-12-09 05:24:22.610439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.219 [2024-12-09 05:24:22.610463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.219 [2024-12-09 05:24:22.610473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:40.219 [2024-12-09 05:24:22.615547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.219 [2024-12-09 05:24:22.615570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.219 [2024-12-09 05:24:22.615581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:40.219 [2024-12-09 05:24:22.620536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.219 [2024-12-09 05:24:22.620560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.219 [2024-12-09 05:24:22.620570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.219 [2024-12-09 05:24:22.625485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.219 [2024-12-09 05:24:22.625509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.219 [2024-12-09 05:24:22.625520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:40.219 [2024-12-09 05:24:22.630453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.219 [2024-12-09 05:24:22.630476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.219 [2024-12-09 05:24:22.630486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:40.219 [2024-12-09 05:24:22.635482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.219 [2024-12-09 05:24:22.635505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.219 [2024-12-09 
05:24:22.635515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:40.219 [2024-12-09 05:24:22.640433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.219 [2024-12-09 05:24:22.640456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.219 [2024-12-09 05:24:22.640467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.219 [2024-12-09 05:24:22.643147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.219 [2024-12-09 05:24:22.643170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.219 [2024-12-09 05:24:22.643180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:40.219 [2024-12-09 05:24:22.648213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.219 [2024-12-09 05:24:22.648236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.219 [2024-12-09 05:24:22.648253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:40.219 [2024-12-09 05:24:22.653184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.219 [2024-12-09 05:24:22.653213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.219 [2024-12-09 05:24:22.653224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:40.219 [2024-12-09 05:24:22.658151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.219 [2024-12-09 05:24:22.658174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.219 [2024-12-09 05:24:22.658185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.219 [2024-12-09 05:24:22.663200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.219 [2024-12-09 05:24:22.663229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.219 [2024-12-09 05:24:22.663239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:40.219 [2024-12-09 05:24:22.668263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.219 [2024-12-09 05:24:22.668285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.219 [2024-12-09 05:24:22.668296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:40.219 [2024-12-09 05:24:22.673191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.219 [2024-12-09 05:24:22.673220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.219 [2024-12-09 05:24:22.673230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:40.219 [2024-12-09 05:24:22.678263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.219 [2024-12-09 05:24:22.678285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.219 [2024-12-09 05:24:22.678295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.219 [2024-12-09 05:24:22.683261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.219 [2024-12-09 05:24:22.683283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.219 [2024-12-09 05:24:22.683294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:40.479 [2024-12-09 05:24:22.688283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.479 [2024-12-09 05:24:22.688305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.479 [2024-12-09 05:24:22.688316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:40.479 [2024-12-09 05:24:22.693279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22ee850) 00:29:40.479 [2024-12-09 05:24:22.693305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.479 [2024-12-09 05:24:22.693316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:40.479 [2024-12-09 05:24:22.698316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.479 [2024-12-09 05:24:22.698339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.479 [2024-12-09 05:24:22.698350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.479 [2024-12-09 05:24:22.703346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.479 [2024-12-09 05:24:22.703369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.479 [2024-12-09 05:24:22.703380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:40.479 [2024-12-09 05:24:22.708110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.479 [2024-12-09 05:24:22.708133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.479 [2024-12-09 05:24:22.708143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:40.479 [2024-12-09 05:24:22.712825] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.479 [2024-12-09 05:24:22.712847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.479 [2024-12-09 05:24:22.712858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:40.479 [2024-12-09 05:24:22.717682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.479 [2024-12-09 05:24:22.717705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.479 [2024-12-09 05:24:22.717715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.722592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.722615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.722626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.727642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.727665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.727675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:003b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.732626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.732650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.732660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.737670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.737693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.737703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.742701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.742724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.742734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.747651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.747674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.747684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.752624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.752647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.752657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.757606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.757628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.757639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.762578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.762601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.762612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.767547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.767570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 
05:24:22.767581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.772514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.772536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.772547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.777514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.777537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.777551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.782501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.782524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.782534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.787478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.787501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21888 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.787512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.792368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.792393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.792403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.797434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.797457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.797467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.802469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.802492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.802503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.807430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.807453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.807463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.812417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.812440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.812450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.817384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.817408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.817418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.822395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.822418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.822428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.827680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 
00:29:40.480 [2024-12-09 05:24:22.827704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.827714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.833025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.833048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.833059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.838197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.838227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.838238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.843400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:40.480 [2024-12-09 05:24:22.843423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.480 [2024-12-09 05:24:22.843434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:40.480 [2024-12-09 05:24:22.848545] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.480 [2024-12-09 05:24:22.848568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.480 [2024-12-09 05:24:22.848579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:40.480 [2024-12-09 05:24:22.853685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.480 [2024-12-09 05:24:22.853708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.480 [2024-12-09 05:24:22.853719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:40.481 [2024-12-09 05:24:22.858692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.481 [2024-12-09 05:24:22.858714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.481 [2024-12-09 05:24:22.858724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:40.481 [2024-12-09 05:24:22.863730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.481 [2024-12-09 05:24:22.863752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.481 [2024-12-09 05:24:22.863766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:40.481 [2024-12-09 05:24:22.868806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.481 [2024-12-09 05:24:22.868829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.481 [2024-12-09 05:24:22.868840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:40.481 [2024-12-09 05:24:22.873936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.481 [2024-12-09 05:24:22.873959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.481 [2024-12-09 05:24:22.873969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:40.481 [2024-12-09 05:24:22.879229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.481 [2024-12-09 05:24:22.879252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.481 [2024-12-09 05:24:22.879266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:40.481 [2024-12-09 05:24:22.884658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.481 [2024-12-09 05:24:22.884681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.481 [2024-12-09 05:24:22.884692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:40.481 [2024-12-09 05:24:22.890134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.481 [2024-12-09 05:24:22.890157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.481 [2024-12-09 05:24:22.890168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:40.481 [2024-12-09 05:24:22.895549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.481 [2024-12-09 05:24:22.895572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.481 [2024-12-09 05:24:22.895583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:40.481 [2024-12-09 05:24:22.900811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.481 [2024-12-09 05:24:22.900835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.481 [2024-12-09 05:24:22.900845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:40.481 [2024-12-09 05:24:22.906156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.481 [2024-12-09 05:24:22.906178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.481 [2024-12-09 05:24:22.906188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:40.481 [2024-12-09 05:24:22.911467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.481 [2024-12-09 05:24:22.911493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.481 [2024-12-09 05:24:22.911504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:40.481 [2024-12-09 05:24:22.916562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.481 [2024-12-09 05:24:22.916585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.481 [2024-12-09 05:24:22.916595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:40.481 [2024-12-09 05:24:22.921743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.481 [2024-12-09 05:24:22.921766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.481 [2024-12-09 05:24:22.921776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:40.481 [2024-12-09 05:24:22.926928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.481 [2024-12-09 05:24:22.926951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.481 [2024-12-09 05:24:22.926961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:40.481 [2024-12-09 05:24:22.932136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.481 [2024-12-09 05:24:22.932159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.481 [2024-12-09 05:24:22.932169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:40.481 [2024-12-09 05:24:22.937522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.481 [2024-12-09 05:24:22.937544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.481 [2024-12-09 05:24:22.937555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:40.481 [2024-12-09 05:24:22.942954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.481 [2024-12-09 05:24:22.942978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.481 [2024-12-09 05:24:22.942989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:40.742 [2024-12-09 05:24:22.948480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.742 [2024-12-09 05:24:22.948504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.742 [2024-12-09 05:24:22.948516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:40.742 [2024-12-09 05:24:22.954086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.742 [2024-12-09 05:24:22.954110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.742 [2024-12-09 05:24:22.954121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:40.742 [2024-12-09 05:24:22.959415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.742 [2024-12-09 05:24:22.959439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.742 [2024-12-09 05:24:22.959449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:40.742 [2024-12-09 05:24:22.964723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.742 [2024-12-09 05:24:22.964746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.742 [2024-12-09 05:24:22.964756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:40.742 [2024-12-09 05:24:22.970260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.742 [2024-12-09 05:24:22.970284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.742 [2024-12-09 05:24:22.970294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:40.742 [2024-12-09 05:24:22.975628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.742 [2024-12-09 05:24:22.975652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.742 [2024-12-09 05:24:22.975662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:40.742 [2024-12-09 05:24:22.980856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.742 [2024-12-09 05:24:22.980880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.742 [2024-12-09 05:24:22.980890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:40.742 [2024-12-09 05:24:22.986146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.742 [2024-12-09 05:24:22.986169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.742 [2024-12-09 05:24:22.986180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:40.742 [2024-12-09 05:24:22.991562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.742 [2024-12-09 05:24:22.991585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.742 [2024-12-09 05:24:22.991595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:40.742 [2024-12-09 05:24:22.996794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.742 [2024-12-09 05:24:22.996818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.742 [2024-12-09 05:24:22.996828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:40.742 [2024-12-09 05:24:23.002134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.742 [2024-12-09 05:24:23.002159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.002172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.007319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.007342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.007353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.012525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.012548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.012558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.017646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.017668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.017679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.022885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.022908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.022918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.027974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.027997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.028007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.033321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.033344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.033354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.038489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.038513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.038524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.043757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.043783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.043794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.049071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.049098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.049108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.054327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.054351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.054361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.059683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.059707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.059717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.065040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.065062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.065073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.070287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.070311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.070321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.075549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.075572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.075583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.080627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.080651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.080661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.085768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.085791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.085801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.090836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.090859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.090869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.096068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.096091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.096101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.101065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.101089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.101099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.105947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.105970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.105981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.111296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.111319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.111329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.116713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.116736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.116747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.121572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.121595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.121606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.124358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.124381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.124392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.129433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.129456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.129467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.135013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.135039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.743 [2024-12-09 05:24:23.135050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:40.743 [2024-12-09 05:24:23.140596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.743 [2024-12-09 05:24:23.140620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.744 [2024-12-09 05:24:23.140630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:40.744 [2024-12-09 05:24:23.145797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.744 [2024-12-09 05:24:23.145819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.744 [2024-12-09 05:24:23.145829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:40.744 [2024-12-09 05:24:23.150325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.744 [2024-12-09 05:24:23.150348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.744 [2024-12-09 05:24:23.150359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:40.744 [2024-12-09 05:24:23.155437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.744 [2024-12-09 05:24:23.155460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.744 [2024-12-09 05:24:23.155470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:40.744 [2024-12-09 05:24:23.160651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.744 [2024-12-09 05:24:23.160674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.744 [2024-12-09 05:24:23.160684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:40.744 [2024-12-09 05:24:23.165981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.744 [2024-12-09 05:24:23.166010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.744 [2024-12-09 05:24:23.166021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:40.744 [2024-12-09 05:24:23.171521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.744 [2024-12-09 05:24:23.171545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.744 [2024-12-09 05:24:23.171556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:40.744 [2024-12-09 05:24:23.176389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.744 [2024-12-09 05:24:23.176413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.744 [2024-12-09 05:24:23.176424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:40.744 [2024-12-09 05:24:23.181850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.744 [2024-12-09 05:24:23.181874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.744 [2024-12-09 05:24:23.181885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:40.744 [2024-12-09 05:24:23.187259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.744 [2024-12-09 05:24:23.187282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.744 [2024-12-09 05:24:23.187293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:40.744 [2024-12-09 05:24:23.192674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.744 [2024-12-09 05:24:23.192698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.744 [2024-12-09 05:24:23.192709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:40.744 [2024-12-09 05:24:23.198160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.744 [2024-12-09 05:24:23.198184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.744 [2024-12-09 05:24:23.198195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:40.744 [2024-12-09 05:24:23.203582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.744 [2024-12-09 05:24:23.203605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.744 [2024-12-09 05:24:23.203616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:40.744 [2024-12-09 05:24:23.208828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:40.744 [2024-12-09 05:24:23.208852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.744 [2024-12-09 05:24:23.208863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:41.064 [2024-12-09 05:24:23.214073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:41.064 [2024-12-09 05:24:23.214097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.064 [2024-12-09 05:24:23.214108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:41.064 [2024-12-09 05:24:23.219385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:41.064 [2024-12-09 05:24:23.219409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.065 [2024-12-09 05:24:23.219419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:41.065 [2024-12-09 05:24:23.225111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:41.065 [2024-12-09 05:24:23.225134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.065 [2024-12-09 05:24:23.225149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:41.065 [2024-12-09 05:24:23.230748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:41.065 [2024-12-09 05:24:23.230771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.065 [2024-12-09 05:24:23.230781] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.236042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.236065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.065 [2024-12-09 05:24:23.236076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.241304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.241327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.065 [2024-12-09 05:24:23.241337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.246420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.246443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.065 [2024-12-09 05:24:23.246453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.251589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.251612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:41.065 [2024-12-09 05:24:23.251622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.256861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.256884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.065 [2024-12-09 05:24:23.256894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.262172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.262194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.065 [2024-12-09 05:24:23.262204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.267337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.267360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.065 [2024-12-09 05:24:23.267371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.272832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.272858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.065 [2024-12-09 05:24:23.272869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.278478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.278502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.065 [2024-12-09 05:24:23.278512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.284536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.284559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.065 [2024-12-09 05:24:23.284570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.290395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.290418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.065 [2024-12-09 05:24:23.290429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.295844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.295868] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.065 [2024-12-09 05:24:23.295878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.301081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.301104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.065 [2024-12-09 05:24:23.301115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.306299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.306322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.065 [2024-12-09 05:24:23.306333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.311671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.311695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.065 [2024-12-09 05:24:23.311705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.316830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.316852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.065 [2024-12-09 05:24:23.316862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.322133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.322156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.065 [2024-12-09 05:24:23.322166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.327240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.327262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.065 [2024-12-09 05:24:23.327273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.332402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.332425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.065 [2024-12-09 05:24:23.332436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.337718] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.337748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.065 [2024-12-09 05:24:23.337759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.343017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.343040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.065 [2024-12-09 05:24:23.343050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.348500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.348523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.065 [2024-12-09 05:24:23.348534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.065 [2024-12-09 05:24:23.354184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.065 [2024-12-09 05:24:23.354212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.354223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003b 
p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.359751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.359773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.359784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.362630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.362652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.362665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.367885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.367907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.367917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.373073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.373095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.373106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.378052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.378074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.378084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.382849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.382871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.382881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.387929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.387951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.387962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.393043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.393065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.393075] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.398067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.398089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.398100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.403237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.403259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.403269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.408424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.408445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.408455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.413922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.413945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.413955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.419192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.419221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.419232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.424347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.424369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.424379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.429525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.429547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.429558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.066 5993.00 IOPS, 749.12 MiB/s [2024-12-09T04:24:23.536Z] [2024-12-09 05:24:23.435785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 
05:24:23.435807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.435818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.440940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.440963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.440973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.446080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.446102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.446112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.451340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.451363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.451376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.457217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.457241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.457251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.464721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.464745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.464757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.470666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.470690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.470701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.478010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.478034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.478045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.483447] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.483471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.483482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.488940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.488964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.488974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.494798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.066 [2024-12-09 05:24:23.494822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.066 [2024-12-09 05:24:23.494832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.066 [2024-12-09 05:24:23.500701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.067 [2024-12-09 05:24:23.500724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.067 [2024-12-09 05:24:23.500734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:001b p:0 m:0 dnr:0 00:29:41.067 [2024-12-09 05:24:23.506220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.067 [2024-12-09 05:24:23.506247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.067 [2024-12-09 05:24:23.506258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.067 [2024-12-09 05:24:23.511675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.067 [2024-12-09 05:24:23.511699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.067 [2024-12-09 05:24:23.511709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.067 [2024-12-09 05:24:23.516886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.067 [2024-12-09 05:24:23.516909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.067 [2024-12-09 05:24:23.516919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.067 [2024-12-09 05:24:23.522108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.067 [2024-12-09 05:24:23.522131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.067 [2024-12-09 05:24:23.522141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.067 [2024-12-09 05:24:23.527303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.067 [2024-12-09 05:24:23.527326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.067 [2024-12-09 05:24:23.527336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.327 [2024-12-09 05:24:23.532573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.327 [2024-12-09 05:24:23.532597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.327 [2024-12-09 05:24:23.532608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.327 [2024-12-09 05:24:23.538097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.327 [2024-12-09 05:24:23.538121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.327 [2024-12-09 05:24:23.538132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.327 [2024-12-09 05:24:23.543478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.327 [2024-12-09 05:24:23.543501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.327 [2024-12-09 
05:24:23.543512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.327 [2024-12-09 05:24:23.549058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.327 [2024-12-09 05:24:23.549081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.327 [2024-12-09 05:24:23.549091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.327 [2024-12-09 05:24:23.554448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.327 [2024-12-09 05:24:23.554472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.327 [2024-12-09 05:24:23.554483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.327 [2024-12-09 05:24:23.559618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.327 [2024-12-09 05:24:23.559641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.327 [2024-12-09 05:24:23.559652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.327 [2024-12-09 05:24:23.564692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.327 [2024-12-09 05:24:23.564715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12544 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.327 [2024-12-09 05:24:23.564725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.327 [2024-12-09 05:24:23.569864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.327 [2024-12-09 05:24:23.569886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.327 [2024-12-09 05:24:23.569896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.327 [2024-12-09 05:24:23.575120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.327 [2024-12-09 05:24:23.575143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.327 [2024-12-09 05:24:23.575153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.327 [2024-12-09 05:24:23.580327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.327 [2024-12-09 05:24:23.580350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.327 [2024-12-09 05:24:23.580361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.327 [2024-12-09 05:24:23.585612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.327 [2024-12-09 05:24:23.585636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.327 [2024-12-09 05:24:23.585647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.327 [2024-12-09 05:24:23.590802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.327 [2024-12-09 05:24:23.590825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.327 [2024-12-09 05:24:23.590835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.327 [2024-12-09 05:24:23.595900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.327 [2024-12-09 05:24:23.595926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.327 [2024-12-09 05:24:23.595937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.327 [2024-12-09 05:24:23.601044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.327 [2024-12-09 05:24:23.601067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.327 [2024-12-09 05:24:23.601078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.327 [2024-12-09 05:24:23.606264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22ee850) 00:29:41.327 [2024-12-09 05:24:23.606287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.327 [2024-12-09 05:24:23.606297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.327 [2024-12-09 05:24:23.611514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.327 [2024-12-09 05:24:23.611537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.327 [2024-12-09 05:24:23.611547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.327 [2024-12-09 05:24:23.616646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.327 [2024-12-09 05:24:23.616668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.327 [2024-12-09 05:24:23.616679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.327 [2024-12-09 05:24:23.621899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.327 [2024-12-09 05:24:23.621922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.327 [2024-12-09 05:24:23.621932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.327 [2024-12-09 05:24:23.627072] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.327 [2024-12-09 05:24:23.627096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.327 [2024-12-09 05:24:23.627106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.327 [2024-12-09 05:24:23.632154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.327 [2024-12-09 05:24:23.632177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.327 [2024-12-09 05:24:23.632187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.327 [2024-12-09 05:24:23.637288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.327 [2024-12-09 05:24:23.637310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.637320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.642473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.642496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.642507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b 
p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.648144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.648167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.648177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.653698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.653721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.653731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.659017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.659040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.659050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.664177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.664200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.664217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.669282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.669304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.669315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.674555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.674578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.674588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.679750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.679772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.679783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.684919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.684941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.684955] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.690293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.690315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.690327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.695512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.695536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.695546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.700804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.700827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.700837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.706076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.706098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.706109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.711512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.711535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.711545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.716893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.716916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.716926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.722029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.722052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.722062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.727222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.727245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:10 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.727256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.732414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.732441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.732451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.737575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.737599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.737609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.742952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.742976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.742986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.748410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 
05:24:23.748434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.748444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.753903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.753927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.753937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.759038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.759060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.759071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.764345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.764368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.764378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.769463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.769487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.769497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.774632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.774655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.328 [2024-12-09 05:24:23.774667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.328 [2024-12-09 05:24:23.779767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.328 [2024-12-09 05:24:23.779793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.329 [2024-12-09 05:24:23.779804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.329 [2024-12-09 05:24:23.784979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.329 [2024-12-09 05:24:23.785003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.329 [2024-12-09 05:24:23.785013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.329 [2024-12-09 05:24:23.790145] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.329 [2024-12-09 05:24:23.790169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.329 [2024-12-09 05:24:23.790180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.589 [2024-12-09 05:24:23.795329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.589 [2024-12-09 05:24:23.795353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.589 [2024-12-09 05:24:23.795364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.589 [2024-12-09 05:24:23.800555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.589 [2024-12-09 05:24:23.800579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.589 [2024-12-09 05:24:23.800590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.589 [2024-12-09 05:24:23.805755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.589 [2024-12-09 05:24:23.805778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.589 [2024-12-09 05:24:23.805789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:005b p:0 m:0 dnr:0 00:29:41.589 [2024-12-09 05:24:23.810959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.589 [2024-12-09 05:24:23.810983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.589 [2024-12-09 05:24:23.810993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.589 [2024-12-09 05:24:23.816228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.589 [2024-12-09 05:24:23.816251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.589 [2024-12-09 05:24:23.816261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.589 [2024-12-09 05:24:23.821331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.589 [2024-12-09 05:24:23.821354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.589 [2024-12-09 05:24:23.821368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.589 [2024-12-09 05:24:23.826521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.589 [2024-12-09 05:24:23.826544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.589 [2024-12-09 05:24:23.826554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.589 [2024-12-09 05:24:23.831731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.590 [2024-12-09 05:24:23.831756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.590 [2024-12-09 05:24:23.831766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.590 [2024-12-09 05:24:23.836936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.590 [2024-12-09 05:24:23.836959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.590 [2024-12-09 05:24:23.836970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.590 [2024-12-09 05:24:23.842116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.590 [2024-12-09 05:24:23.842140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.590 [2024-12-09 05:24:23.842150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.590 [2024-12-09 05:24:23.847324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.590 [2024-12-09 05:24:23.847348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.590 [2024-12-09 
05:24:23.847358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.590 [2024-12-09 05:24:23.852512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.590 [2024-12-09 05:24:23.852535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.590 [2024-12-09 05:24:23.852545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.590 [2024-12-09 05:24:23.857770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.590 [2024-12-09 05:24:23.857794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.590 [2024-12-09 05:24:23.857804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.590 [2024-12-09 05:24:23.862930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.590 [2024-12-09 05:24:23.862953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.590 [2024-12-09 05:24:23.862963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.590 [2024-12-09 05:24:23.868094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.590 [2024-12-09 05:24:23.868117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20512 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.590 [2024-12-09 05:24:23.868128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:41.590 [2024-12-09 05:24:23.873313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:41.590 [2024-12-09 05:24:23.873337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.590 [2024-12-09 05:24:23.873347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:41.590 [2024-12-09 05:24:23.878492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:41.590 [2024-12-09 05:24:23.878515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.590 [2024-12-09 05:24:23.878526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0
[... the same three-record pattern repeats for dozens of further I/Os between 05:24:23.884 and 05:24:24.277: a data digest error on tqpair=(0x22ee850) from nvme_tcp.c:1365, the affected READ command (sqid:1, nsid:1, len:32) from nvme_qpair.c:243, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion from nvme_qpair.c:474, varying only in timestamp, cid, lba, and sqhd ...]
00:29:41.854 [2024-12-09 05:24:24.282333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850)
00:29:41.854 [2024-12-09 05:24:24.282356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.854 [2024-12-09 05:24:24.282366]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.854 [2024-12-09 05:24:24.287332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.854 [2024-12-09 05:24:24.287356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.854 [2024-12-09 05:24:24.287366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.854 [2024-12-09 05:24:24.292316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.854 [2024-12-09 05:24:24.292339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.854 [2024-12-09 05:24:24.292349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.854 [2024-12-09 05:24:24.297220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.854 [2024-12-09 05:24:24.297243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.854 [2024-12-09 05:24:24.297253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:41.854 [2024-12-09 05:24:24.302279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.854 [2024-12-09 05:24:24.302302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:41.854 [2024-12-09 05:24:24.302312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:41.854 [2024-12-09 05:24:24.307306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.854 [2024-12-09 05:24:24.307332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.854 [2024-12-09 05:24:24.307342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:41.854 [2024-12-09 05:24:24.312322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.854 [2024-12-09 05:24:24.312345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.854 [2024-12-09 05:24:24.312355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:41.854 [2024-12-09 05:24:24.317378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:41.854 [2024-12-09 05:24:24.317400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.854 [2024-12-09 05:24:24.317411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.322438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.322461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.322472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.327472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.327494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.327505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.332465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.332489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.332499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.337535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.337558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.337568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.342639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.342661] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.342672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.347574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.347596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.347607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.350296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.350319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.350329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.355315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.355338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.355348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.360416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.360438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.360448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.365375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.365396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.365407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.370250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.370273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.370283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.375329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.375350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.375361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.380314] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.380335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.380348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.385269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.385292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.385302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.390325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.390347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.390361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.395184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.395212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.395223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:005b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.400169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.400190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.400201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.405116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.405139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.405149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.410221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.410242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.410252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.415218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.415240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.415250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.420130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.420152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.420162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.425090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.425113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.425123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.429762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.429784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.429795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:42.113 [2024-12-09 05:24:24.434803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22ee850) 00:29:42.113 [2024-12-09 05:24:24.434826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.113 [2024-12-09 05:24:24.434836] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:42.113 5998.00 IOPS, 749.75 MiB/s 00:29:42.113 Latency(us) 00:29:42.113 [2024-12-09T04:24:24.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.113 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:42.113 nvme0n1 : 2.00 5997.25 749.66 0.00 0.00 2665.43 625.87 10013.90 00:29:42.113 [2024-12-09T04:24:24.583Z] =================================================================================================================== 00:29:42.113 [2024-12-09T04:24:24.583Z] Total : 5997.25 749.66 0.00 0.00 2665.43 625.87 10013.90 00:29:42.113 { 00:29:42.113 "results": [ 00:29:42.113 { 00:29:42.113 "job": "nvme0n1", 00:29:42.113 "core_mask": "0x2", 00:29:42.113 "workload": "randread", 00:29:42.113 "status": "finished", 00:29:42.113 "queue_depth": 16, 00:29:42.113 "io_size": 131072, 00:29:42.113 "runtime": 2.002917, 00:29:42.113 "iops": 5997.253006490034, 00:29:42.113 "mibps": 749.6566258112542, 00:29:42.113 "io_failed": 0, 00:29:42.113 "io_timeout": 0, 00:29:42.113 "avg_latency_us": 2665.430330735931, 00:29:42.113 "min_latency_us": 625.8688, 00:29:42.113 "max_latency_us": 10013.9008 00:29:42.113 } 00:29:42.113 ], 00:29:42.113 "core_count": 1 00:29:42.113 } 00:29:42.113 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:42.113 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:42.114 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:42.114 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:42.114 | .driver_specific 
00:29:42.114 | .nvme_error 00:29:42.114 | .status_code 00:29:42.114 | .command_transient_transport_error' 00:29:42.376 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 387 > 0 )) 00:29:42.376 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 649752 00:29:42.376 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 649752 ']' 00:29:42.376 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 649752 00:29:42.376 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:42.376 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:42.376 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 649752 00:29:42.376 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:42.376 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:42.376 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 649752' 00:29:42.376 killing process with pid 649752 00:29:42.376 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 649752 00:29:42.376 Received shutdown signal, test time was about 2.000000 seconds 00:29:42.376 00:29:42.376 Latency(us) 00:29:42.376 [2024-12-09T04:24:24.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.376 [2024-12-09T04:24:24.846Z] =================================================================================================================== 00:29:42.376 [2024-12-09T04:24:24.846Z] Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:29:42.376 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 649752 00:29:42.697 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:42.697 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:42.697 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:42.697 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:42.697 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:42.697 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=650344 00:29:42.697 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 650344 /var/tmp/bperf.sock 00:29:42.697 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:42.697 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 650344 ']' 00:29:42.697 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:42.698 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.698 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:42.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:42.698 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.698 05:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:42.698 [2024-12-09 05:24:24.967667] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:29:42.698 [2024-12-09 05:24:24.967720] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid650344 ] 00:29:42.698 [2024-12-09 05:24:25.060462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.698 [2024-12-09 05:24:25.101444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.410 05:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.410 05:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:43.410 05:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:43.410 05:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:43.694 05:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:43.694 05:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.694 05:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:43.694 05:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.694 05:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:43.694 05:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:43.954 nvme0n1 00:29:43.954 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:43.954 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.954 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:43.954 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.954 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:43.954 05:24:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:43.954 Running I/O for 2 seconds... 
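As an aside on the check earlier in this log: digest.sh's `get_transient_errcount` pulls the counter out of `bdev_get_iostat` with the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`, then asserts it is greater than zero (here, 387). The same extraction can be sketched in Python against a sample payload; the field names below come straight from that jq filter, while the payload itself is a hand-built illustration shaped like the RPC output, not captured from this run:

```python
import json

# Hand-built sample shaped like `rpc.py bdev_get_iostat -b nvme0n1` output.
# Only the fields walked by digest.sh's jq filter are included; 387 matches
# the count this run observed.
iostat_json = """
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 387
          }
        }
      }
    }
  ]
}
"""

def get_transient_errcount(payload: str) -> int:
    # Equivalent of: jq -r '.bdevs[0] | .driver_specific | .nvme_error
    #                        | .status_code | .command_transient_transport_error'
    stats = json.loads(payload)
    return stats["bdevs"][0]["driver_specific"]["nvme_error"] \
                ["status_code"]["command_transient_transport_error"]

count = get_transient_errcount(iostat_json)
print(count)           # the test passes when this is > 0
assert count > 0
```

The test deliberately corrupts CRC32 digests on the accel path, so every injected corruption should surface as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, and a zero count would mean the data-digest check silently failed to fire.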
00:29:43.954 [2024-12-09 05:24:26.384828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef0bc0
00:29:43.954 [2024-12-09 05:24:26.385866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:43.954 [2024-12-09 05:24:26.385895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:29:43.954 [2024-12-09 05:24:26.393117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef9f68
00:29:43.954 [2024-12-09 05:24:26.393710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:43.954 [2024-12-09 05:24:26.393734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:29:43.954 [2024-12-09 05:24:26.401284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee5220
00:29:43.954 [2024-12-09 05:24:26.401944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:43.954 [2024-12-09 05:24:26.401965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:43.954 [2024-12-09 05:24:26.410669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efc128
00:29:43.954 [2024-12-09 05:24:26.411469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:43.954 [2024-12-09 05:24:26.411491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:29:43.954 [2024-12-09 05:24:26.419946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee3d08
00:29:43.954 [2024-12-09 05:24:26.420834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:43.954 [2024-12-09 05:24:26.420855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:29:44.214 [2024-12-09 05:24:26.429291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee38d0
00:29:44.214 [2024-12-09 05:24:26.430292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.214 [2024-12-09 05:24:26.430313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:29:44.214 [2024-12-09 05:24:26.438550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ede470
00:29:44.214 [2024-12-09 05:24:26.439693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.214 [2024-12-09 05:24:26.439714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:29:44.214 [2024-12-09 05:24:26.447811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efc128
00:29:44.214 [2024-12-09 05:24:26.449066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.214 [2024-12-09 05:24:26.449087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:29:44.214 [2024-12-09 05:24:26.457116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eea248
00:29:44.214 [2024-12-09 05:24:26.458514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.214 [2024-12-09 05:24:26.458535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:44.214 [2024-12-09 05:24:26.466591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef3e60
00:29:44.214 [2024-12-09 05:24:26.468071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.214 [2024-12-09 05:24:26.468092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:44.214 [2024-12-09 05:24:26.472821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee99d8
00:29:44.214 [2024-12-09 05:24:26.473487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.214 [2024-12-09 05:24:26.473507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:29:44.214 [2024-12-09 05:24:26.481251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef4298
00:29:44.214 [2024-12-09 05:24:26.481908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.214 [2024-12-09 05:24:26.481928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:44.214 [2024-12-09 05:24:26.490486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef9f68
00:29:44.214 [2024-12-09 05:24:26.491273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.214 [2024-12-09 05:24:26.491294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:29:44.214 [2024-12-09 05:24:26.501253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee3060
00:29:44.214 [2024-12-09 05:24:26.502409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.214 [2024-12-09 05:24:26.502429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:44.214 [2024-12-09 05:24:26.509601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eeb760
00:29:44.214 [2024-12-09 05:24:26.510733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.215 [2024-12-09 05:24:26.510753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:29:44.215 [2024-12-09 05:24:26.518868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efa3a0
00:29:44.215 [2024-12-09 05:24:26.520129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.215 [2024-12-09 05:24:26.520149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:29:44.215 [2024-12-09 05:24:26.528137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efd640
00:29:44.215 [2024-12-09 05:24:26.529514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.215 [2024-12-09 05:24:26.529533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:44.215 [2024-12-09 05:24:26.537061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee1710
00:29:44.215 [2024-12-09 05:24:26.538431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.215 [2024-12-09 05:24:26.538452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:29:44.215 [2024-12-09 05:24:26.543324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efe2e8
00:29:44.215 [2024-12-09 05:24:26.543971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.215 [2024-12-09 05:24:26.543990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:29:44.215 [2024-12-09 05:24:26.552519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eefae0
00:29:44.215 [2024-12-09 05:24:26.553262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.215 [2024-12-09 05:24:26.553288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:29:44.215 [2024-12-09 05:24:26.561442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee9e10
00:29:44.215 [2024-12-09 05:24:26.562220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.215 [2024-12-09 05:24:26.562240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:29:44.215 [2024-12-09 05:24:26.572102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee49b0
00:29:44.215 [2024-12-09 05:24:26.573240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.215 [2024-12-09 05:24:26.573261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:44.215 [2024-12-09 05:24:26.580470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efc560
00:29:44.215 [2024-12-09 05:24:26.581609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.215 [2024-12-09 05:24:26.581629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:29:44.215 [2024-12-09 05:24:26.589681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee9168
00:29:44.215 [2024-12-09 05:24:26.590930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.215 [2024-12-09 05:24:26.590956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:29:44.215 [2024-12-09 05:24:26.598935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efeb58
00:29:44.215 [2024-12-09 05:24:26.600321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.215 [2024-12-09 05:24:26.600344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:44.215 [2024-12-09 05:24:26.608159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eecc78
00:29:44.215 [2024-12-09 05:24:26.609646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.215 [2024-12-09 05:24:26.609666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:29:44.215 [2024-12-09 05:24:26.614369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016edfdc0
00:29:44.215 [2024-12-09 05:24:26.615033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.215 [2024-12-09 05:24:26.615053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:29:44.215 [2024-12-09 05:24:26.623557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efbcf0
00:29:44.215 [2024-12-09 05:24:26.624008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.215 [2024-12-09 05:24:26.624028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:29:44.215 [2024-12-09 05:24:26.632760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eec840
00:29:44.215 [2024-12-09 05:24:26.633333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.215 [2024-12-09 05:24:26.633353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:44.215 [2024-12-09 05:24:26.641974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee01f8
00:29:44.215 [2024-12-09 05:24:26.642698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.215 [2024-12-09 05:24:26.642718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:44.215 [2024-12-09 05:24:26.650575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef1868
00:29:44.215 [2024-12-09 05:24:26.651828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.215 [2024-12-09 05:24:26.651848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:29:44.215 [2024-12-09 05:24:26.658186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef5be8
00:29:44.215 [2024-12-09 05:24:26.658845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.215 [2024-12-09 05:24:26.658865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:44.215 [2024-12-09 05:24:26.667442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee88f8
00:29:44.215 [2024-12-09 05:24:26.668213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.215 [2024-12-09 05:24:26.668239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:29:44.215 [2024-12-09 05:24:26.676692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eec840
00:29:44.215 [2024-12-09 05:24:26.677589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.215 [2024-12-09 05:24:26.677610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:29:44.475 [2024-12-09 05:24:26.685966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eec408
00:29:44.475 [2024-12-09 05:24:26.686965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.475 [2024-12-09 05:24:26.686986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:29:44.475 [2024-12-09 05:24:26.695254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee27f0
00:29:44.475 [2024-12-09 05:24:26.696377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.475 [2024-12-09 05:24:26.696397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:29:44.475 [2024-12-09 05:24:26.704451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee88f8
00:29:44.475 [2024-12-09 05:24:26.705609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.475 [2024-12-09 05:24:26.705630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:29:44.475 [2024-12-09 05:24:26.713216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee99d8
00:29:44.475 [2024-12-09 05:24:26.714450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.475 [2024-12-09 05:24:26.714470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:29:44.475 [2024-12-09 05:24:26.722183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef6890
00:29:44.475 [2024-12-09 05:24:26.723424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.475 [2024-12-09 05:24:26.723459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:29:44.475 [2024-12-09 05:24:26.729699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef0788
00:29:44.475 [2024-12-09 05:24:26.730163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.475 [2024-12-09 05:24:26.730182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:44.475 [2024-12-09 05:24:26.738860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef35f0
00:29:44.475 [2024-12-09 05:24:26.739423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.475 [2024-12-09 05:24:26.739443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:29:44.475 [2024-12-09 05:24:26.748109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eec408
00:29:44.475 [2024-12-09 05:24:26.748784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.475 [2024-12-09 05:24:26.748806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:44.475 [2024-12-09 05:24:26.756412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eed4e8
00:29:44.475 [2024-12-09 05:24:26.757618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.475 [2024-12-09 05:24:26.757639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:44.475 [2024-12-09 05:24:26.764022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee0630
00:29:44.475 [2024-12-09 05:24:26.764674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.475 [2024-12-09 05:24:26.764694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:44.475 [2024-12-09 05:24:26.773244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef7970
00:29:44.475 [2024-12-09 05:24:26.773996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.475 [2024-12-09 05:24:26.774016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:44.475 [2024-12-09 05:24:26.782146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef35f0
00:29:44.475 [2024-12-09 05:24:26.782906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.476 [2024-12-09 05:24:26.782926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:29:44.476 [2024-12-09 05:24:26.791436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eed4e8
00:29:44.476 [2024-12-09 05:24:26.792195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.476 [2024-12-09 05:24:26.792218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:29:44.476 [2024-12-09 05:24:26.799699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eeaef0
00:29:44.476 [2024-12-09 05:24:26.800351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.476 [2024-12-09 05:24:26.800379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:29:44.476 [2024-12-09 05:24:26.808177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee49b0
00:29:44.476 [2024-12-09 05:24:26.808741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.476 [2024-12-09 05:24:26.808761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:44.476 [2024-12-09 05:24:26.817175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eed0b0
00:29:44.476 [2024-12-09 05:24:26.817836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.476 [2024-12-09 05:24:26.817857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:44.476 [2024-12-09 05:24:26.827981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef2d80
00:29:44.476 [2024-12-09 05:24:26.829084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.476 [2024-12-09 05:24:26.829120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:44.476 [2024-12-09 05:24:26.834596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef6cc8
00:29:44.476 [2024-12-09 05:24:26.835240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.476 [2024-12-09 05:24:26.835261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:44.476 [2024-12-09 05:24:26.843843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee5a90
00:29:44.476 [2024-12-09 05:24:26.844602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.476 [2024-12-09 05:24:26.844622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:44.476 [2024-12-09 05:24:26.853116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eeee38
00:29:44.476 [2024-12-09 05:24:26.853999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.476 [2024-12-09 05:24:26.854018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:29:44.476 [2024-12-09 05:24:26.862382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef81e0
00:29:44.476 [2024-12-09 05:24:26.863403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.476 [2024-12-09 05:24:26.863423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:44.476 [2024-12-09 05:24:26.871619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef1868
00:29:44.476 [2024-12-09 05:24:26.872728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.476 [2024-12-09 05:24:26.872748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:29:44.476 [2024-12-09 05:24:26.880834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee5a90
00:29:44.476 [2024-12-09 05:24:26.882061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.476 [2024-12-09 05:24:26.882082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:44.476 [2024-12-09 05:24:26.890099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efbcf0
00:29:44.476 [2024-12-09 05:24:26.891480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.476 [2024-12-09 05:24:26.891501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:29:44.476 [2024-12-09 05:24:26.899346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee9e10
00:29:44.476 [2024-12-09 05:24:26.900848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.476 [2024-12-09 05:24:26.900868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:44.476 [2024-12-09 05:24:26.905761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef8618
00:29:44.476 [2024-12-09 05:24:26.906440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.476 [2024-12-09 05:24:26.906460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:44.476 [2024-12-09 05:24:26.914195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef4298
00:29:44.476 [2024-12-09 05:24:26.914837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.476 [2024-12-09 05:24:26.914856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:44.476 [2024-12-09 05:24:26.923459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee6fa8
00:29:44.476 [2024-12-09 05:24:26.924230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.476 [2024-12-09 05:24:26.924250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:44.476 [2024-12-09 05:24:26.934300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eef270
00:29:44.476 [2024-12-09 05:24:26.935427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.476 [2024-12-09 05:24:26.935448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:44.476 [2024-12-09 05:24:26.941523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eeea00
00:29:44.476 [2024-12-09 05:24:26.942107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.476 [2024-12-09 05:24:26.942128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:29:44.736 [2024-12-09 05:24:26.950466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef5be8
00:29:44.736 [2024-12-09 05:24:26.951028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.736 [2024-12-09 05:24:26.951048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:29:44.736 [2024-12-09 05:24:26.959380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef92c0
00:29:44.736 [2024-12-09 05:24:26.959946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.736 [2024-12-09 05:24:26.959967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:29:44.736 [2024-12-09 05:24:26.969370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eddc00
00:29:44.736 [2024-12-09 05:24:26.970502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.736 [2024-12-09 05:24:26.970522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:29:44.736 [2024-12-09 05:24:26.978390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef2d80
00:29:44.736 [2024-12-09 05:24:26.979082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.736 [2024-12-09 05:24:26.979102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:44.736 [2024-12-09 05:24:26.986749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef6458
00:29:44.736 [2024-12-09 05:24:26.987962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.736 [2024-12-09 05:24:26.987984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:29:44.736 [2024-12-09 05:24:26.994335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee3060
00:29:44.736 [2024-12-09 05:24:26.994989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.736 [2024-12-09 05:24:26.995009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:44.736 [2024-12-09 05:24:27.003577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eeaab8
00:29:44.736 [2024-12-09 05:24:27.004275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.736 [2024-12-09 05:24:27.004295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:29:44.736 [2024-12-09 05:24:27.014095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eeaab8
00:29:44.736 [2024-12-09 05:24:27.015253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.736 [2024-12-09 05:24:27.015273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:29:44.736 [2024-12-09 05:24:27.021743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eeee38
00:29:44.736 [2024-12-09 05:24:27.022431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.736 [2024-12-09 05:24:27.022452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:44.736 [2024-12-09 05:24:27.030832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eeee38
00:29:44.736 [2024-12-09 05:24:27.031575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.736 [2024-12-09 05:24:27.031596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:44.736 [2024-12-09 05:24:27.039923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eeee38
00:29:44.736 [2024-12-09 05:24:27.040646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.736 [2024-12-09 05:24:27.040666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:44.736 [2024-12-09 05:24:27.048799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eeee38
00:29:44.736 [2024-12-09 05:24:27.049488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.736 [2024-12-09 05:24:27.049508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:44.736 [2024-12-09 05:24:27.057060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef8618
00:29:44.736 [2024-12-09 05:24:27.057824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.736 [2024-12-09 05:24:27.057847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:44.736 [2024-12-09 05:24:27.066329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016edf550
00:29:44.736 [2024-12-09 05:24:27.067222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.736 [2024-12-09 05:24:27.067243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:29:44.736 [2024-12-09 05:24:27.075354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eeb328
00:29:44.736 [2024-12-09 05:24:27.075800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.736 [2024-12-09 05:24:27.075821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:29:44.736 [2024-12-09 05:24:27.085543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ede470
00:29:44.736 [2024-12-09 05:24:27.086579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.736 [2024-12-09 05:24:27.086600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:44.736 [2024-12-09 05:24:27.094337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef81e0
00:29:44.736 [2024-12-09 05:24:27.095439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.736 [2024-12-09 05:24:27.095459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:29:44.736 [2024-12-09 05:24:27.102647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eeb328
00:29:44.737 [2024-12-09 05:24:27.103673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:44.737 [2024-12-09 05:24:27.103693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:44.737 [2024-12-09 05:24:27.111739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eddc00 00:29:44.737 [2024-12-09 05:24:27.112755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.737 [2024-12-09 05:24:27.112776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:44.737 [2024-12-09 05:24:27.120111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee6b70 00:29:44.737 [2024-12-09 05:24:27.121133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.737 [2024-12-09 05:24:27.121153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:44.737 [2024-12-09 05:24:27.129382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee4578 00:29:44.737 [2024-12-09 05:24:27.130501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.737 [2024-12-09 05:24:27.130522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:44.737 [2024-12-09 05:24:27.138616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eeea00 00:29:44.737 [2024-12-09 05:24:27.139841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.737 [2024-12-09 05:24:27.139861] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:44.737 [2024-12-09 05:24:27.147877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ede470 00:29:44.737 [2024-12-09 05:24:27.149215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.737 [2024-12-09 05:24:27.149235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:44.737 [2024-12-09 05:24:27.157047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee01f8 00:29:44.737 [2024-12-09 05:24:27.158352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.737 [2024-12-09 05:24:27.158372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:44.737 [2024-12-09 05:24:27.163562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee7818 00:29:44.737 [2024-12-09 05:24:27.164261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.737 [2024-12-09 05:24:27.164281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:44.737 [2024-12-09 05:24:27.174417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef3e60 00:29:44.737 [2024-12-09 05:24:27.175600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.737 [2024-12-09 05:24:27.175621] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:44.737 [2024-12-09 05:24:27.182635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eddc00 00:29:44.737 [2024-12-09 05:24:27.183542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.737 [2024-12-09 05:24:27.183563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:44.737 [2024-12-09 05:24:27.191347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eff3c8 00:29:44.737 [2024-12-09 05:24:27.192317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.737 [2024-12-09 05:24:27.192337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:44.737 [2024-12-09 05:24:27.201529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef1ca0 00:29:44.737 [2024-12-09 05:24:27.202802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.737 [2024-12-09 05:24:27.202821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:44.997 [2024-12-09 05:24:27.210074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee73e0 00:29:44.997 [2024-12-09 05:24:27.211303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8799 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:44.997 [2024-12-09 05:24:27.211324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:44.997 [2024-12-09 05:24:27.218893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef6890 00:29:44.997 [2024-12-09 05:24:27.219872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.219893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.228682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef2510 00:29:44.998 [2024-12-09 05:24:27.230189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.230214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.235021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016edf550 00:29:44.998 [2024-12-09 05:24:27.235861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.235881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.245782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eeb760 00:29:44.998 [2024-12-09 05:24:27.246953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 
nsid:1 lba:18721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.246975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.255387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ede8a8 00:29:44.998 [2024-12-09 05:24:27.256815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.256836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.261877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016edf118 00:29:44.998 [2024-12-09 05:24:27.262602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.262622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.271134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eea248 00:29:44.998 [2024-12-09 05:24:27.271966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.271987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.282720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee5658 00:29:44.998 [2024-12-09 05:24:27.284169] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.284188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.290359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef1430 00:29:44.998 [2024-12-09 05:24:27.291349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.291373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.299037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef6cc8 00:29:44.998 [2024-12-09 05:24:27.300213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.300234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.307203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee8088 00:29:44.998 [2024-12-09 05:24:27.307947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.307967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.316167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef2510 00:29:44.998 
[2024-12-09 05:24:27.316792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.316812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.325431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efc560 00:29:44.998 [2024-12-09 05:24:27.326177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.326197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.334310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee5658 00:29:44.998 [2024-12-09 05:24:27.335291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.335311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.342519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee4de8 00:29:44.998 [2024-12-09 05:24:27.343776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.343796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.351839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bd51b0) with pdu=0x200016ef92c0 00:29:44.998 [2024-12-09 05:24:27.352615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.352636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.360353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016edf550 00:29:44.998 [2024-12-09 05:24:27.361252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.361272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.369216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef5378 00:29:44.998 [2024-12-09 05:24:27.370188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.370214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.377884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef92c0 00:29:44.998 [2024-12-09 05:24:27.379759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.379781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:44.998 28728.00 IOPS, 112.22 MiB/s 
[2024-12-09T04:24:27.468Z] [2024-12-09 05:24:27.386494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee9168 00:29:44.998 [2024-12-09 05:24:27.387213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.387233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.397298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eea680 00:29:44.998 [2024-12-09 05:24:27.398368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.398389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.406287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef6458 00:29:44.998 [2024-12-09 05:24:27.407297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.407319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.415534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eea680 00:29:44.998 [2024-12-09 05:24:27.416769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.416790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.422662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef7538 00:29:44.998 [2024-12-09 05:24:27.423525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.423545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.433377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef4b08 00:29:44.998 [2024-12-09 05:24:27.434475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.434495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.442674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eff3c8 00:29:44.998 [2024-12-09 05:24:27.444002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.444021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:44.998 [2024-12-09 05:24:27.448970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee27f0 00:29:44.998 [2024-12-09 05:24:27.449603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.998 [2024-12-09 05:24:27.449623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:44.999 [2024-12-09 05:24:27.460419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee0a68 00:29:44.999 [2024-12-09 05:24:27.462035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:44.999 [2024-12-09 05:24:27.462056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:45.259 [2024-12-09 05:24:27.469940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eed0b0 00:29:45.259 [2024-12-09 05:24:27.471516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.259 [2024-12-09 05:24:27.471536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:45.259 [2024-12-09 05:24:27.476373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef1ca0 00:29:45.259 [2024-12-09 05:24:27.477236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.259 [2024-12-09 05:24:27.477256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:45.259 [2024-12-09 05:24:27.485276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee01f8 00:29:45.259 [2024-12-09 05:24:27.485722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:45.259 [2024-12-09 05:24:27.485743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:45.259 [2024-12-09 05:24:27.494396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efa7d8 00:29:45.259 [2024-12-09 05:24:27.495069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.259 [2024-12-09 05:24:27.495090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:45.259 [2024-12-09 05:24:27.503071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efa7d8 00:29:45.259 [2024-12-09 05:24:27.503752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.259 [2024-12-09 05:24:27.503772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:45.259 [2024-12-09 05:24:27.511866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef3e60 00:29:45.259 [2024-12-09 05:24:27.512660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.259 [2024-12-09 05:24:27.512680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:45.259 [2024-12-09 05:24:27.520659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef3e60 00:29:45.259 [2024-12-09 05:24:27.521428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23951 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.259 [2024-12-09 05:24:27.521452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:45.259 [2024-12-09 05:24:27.529427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef3e60 00:29:45.259 [2024-12-09 05:24:27.530175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.259 [2024-12-09 05:24:27.530195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:45.259 [2024-12-09 05:24:27.538258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef3e60 00:29:45.259 [2024-12-09 05:24:27.539020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.259 [2024-12-09 05:24:27.539040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:45.259 [2024-12-09 05:24:27.547037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef3e60 00:29:45.259 [2024-12-09 05:24:27.547703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.259 [2024-12-09 05:24:27.547724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:45.259 [2024-12-09 05:24:27.555780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eef6a8 00:29:45.259 [2024-12-09 05:24:27.556522] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.259 [2024-12-09 05:24:27.556542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:29:45.259 [2024-12-09 05:24:27.564851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef35f0
00:29:45.259 [2024-12-09 05:24:27.565392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.259 [2024-12-09 05:24:27.565412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:45.259 [2024-12-09 05:24:27.573964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee6300
00:29:45.259 [2024-12-09 05:24:27.574822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.259 [2024-12-09 05:24:27.574842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:45.259 [2024-12-09 05:24:27.582813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef0788
00:29:45.259 [2024-12-09 05:24:27.583667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.259 [2024-12-09 05:24:27.583687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:45.259 [2024-12-09 05:24:27.591707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee84c0
00:29:45.259 [2024-12-09 05:24:27.592579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.259 [2024-12-09 05:24:27.592599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:45.259 [2024-12-09 05:24:27.600510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eec408
00:29:45.259 [2024-12-09 05:24:27.601402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.259 [2024-12-09 05:24:27.601422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:45.259 [2024-12-09 05:24:27.609385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef7970
00:29:45.259 [2024-12-09 05:24:27.610239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.259 [2024-12-09 05:24:27.610258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:45.259 [2024-12-09 05:24:27.618231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee5a90
00:29:45.259 [2024-12-09 05:24:27.619097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.259 [2024-12-09 05:24:27.619116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:45.259 [2024-12-09 05:24:27.627036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee01f8
00:29:45.259 [2024-12-09 05:24:27.627905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.259 [2024-12-09 05:24:27.627924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:45.260 [2024-12-09 05:24:27.636220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eed920
00:29:45.260 [2024-12-09 05:24:27.636875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.260 [2024-12-09 05:24:27.636895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:45.260 [2024-12-09 05:24:27.645141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eed0b0
00:29:45.260 [2024-12-09 05:24:27.646104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.260 [2024-12-09 05:24:27.646123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:45.260 [2024-12-09 05:24:27.653983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee0630
00:29:45.260 [2024-12-09 05:24:27.654945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.260 [2024-12-09 05:24:27.654965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:45.260 [2024-12-09 05:24:27.662110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef4f40
00:29:45.260 [2024-12-09 05:24:27.663437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.260 [2024-12-09 05:24:27.663457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:29:45.260 [2024-12-09 05:24:27.670614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee3498
00:29:45.260 [2024-12-09 05:24:27.671251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.260 [2024-12-09 05:24:27.671271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:45.260 [2024-12-09 05:24:27.679482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee7c50
00:29:45.260 [2024-12-09 05:24:27.680127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.260 [2024-12-09 05:24:27.680147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:45.260 [2024-12-09 05:24:27.688350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efbcf0
00:29:45.260 [2024-12-09 05:24:27.688999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.260 [2024-12-09 05:24:27.689019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:45.260 [2024-12-09 05:24:27.697220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efe2e8
00:29:45.260 [2024-12-09 05:24:27.697861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.260 [2024-12-09 05:24:27.697881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:45.260 [2024-12-09 05:24:27.706076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee73e0
00:29:45.260 [2024-12-09 05:24:27.706723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.260 [2024-12-09 05:24:27.706742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:45.260 [2024-12-09 05:24:27.714904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee3d08
00:29:45.260 [2024-12-09 05:24:27.715557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.260 [2024-12-09 05:24:27.715577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:45.260 [2024-12-09 05:24:27.723724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efef90
00:29:45.260 [2024-12-09 05:24:27.724356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.260 [2024-12-09 05:24:27.724376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:45.520 [2024-12-09 05:24:27.732650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef5378
00:29:45.520 [2024-12-09 05:24:27.733291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.520 [2024-12-09 05:24:27.733311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:45.520 [2024-12-09 05:24:27.741484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee12d8
00:29:45.520 [2024-12-09 05:24:27.742129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.520 [2024-12-09 05:24:27.742149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:45.520 [2024-12-09 05:24:27.750302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016edf988
00:29:45.520 [2024-12-09 05:24:27.750951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.520 [2024-12-09 05:24:27.750974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:45.520 [2024-12-09 05:24:27.759173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eeaab8
00:29:45.520 [2024-12-09 05:24:27.759731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.520 [2024-12-09 05:24:27.759750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:45.520 [2024-12-09 05:24:27.768263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eeff18
00:29:45.520 [2024-12-09 05:24:27.769000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.520 [2024-12-09 05:24:27.769020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:45.520 [2024-12-09 05:24:27.777071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee5658
00:29:45.520 [2024-12-09 05:24:27.777903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.520 [2024-12-09 05:24:27.777923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:29:45.520 [2024-12-09 05:24:27.786339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef7970
00:29:45.520 [2024-12-09 05:24:27.787280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.520 [2024-12-09 05:24:27.787300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:29:45.520 [2024-12-09 05:24:27.795557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efda78
00:29:45.520 [2024-12-09 05:24:27.796618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.520 [2024-12-09 05:24:27.796638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:29:45.520 [2024-12-09 05:24:27.804823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee4de8
00:29:45.520 [2024-12-09 05:24:27.806013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.520 [2024-12-09 05:24:27.806033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:29:45.520 [2024-12-09 05:24:27.813013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efa7d8
00:29:45.520 [2024-12-09 05:24:27.813860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.520 [2024-12-09 05:24:27.813879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:45.520 [2024-12-09 05:24:27.821715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef4f40
00:29:45.520 [2024-12-09 05:24:27.822569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.520 [2024-12-09 05:24:27.822588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:45.520 [2024-12-09 05:24:27.830823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efeb58
00:29:45.520 [2024-12-09 05:24:27.831586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.520 [2024-12-09 05:24:27.831606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:45.520 [2024-12-09 05:24:27.840943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ede8a8
00:29:45.520 [2024-12-09 05:24:27.842356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.520 [2024-12-09 05:24:27.842375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:45.520 [2024-12-09 05:24:27.847183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef92c0
00:29:45.520 [2024-12-09 05:24:27.847805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.520 [2024-12-09 05:24:27.847824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:29:45.520 [2024-12-09 05:24:27.856099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef4298
00:29:45.520 [2024-12-09 05:24:27.856812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.520 [2024-12-09 05:24:27.856832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:29:45.520 [2024-12-09 05:24:27.865895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eea680
00:29:45.521 [2024-12-09 05:24:27.866785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.521 [2024-12-09 05:24:27.866804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:29:45.521 [2024-12-09 05:24:27.874759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016edf550
00:29:45.521 [2024-12-09 05:24:27.875630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.521 [2024-12-09 05:24:27.875650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:29:45.521 [2024-12-09 05:24:27.882981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef6458
00:29:45.521 [2024-12-09 05:24:27.883828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.521 [2024-12-09 05:24:27.883848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:29:45.521 [2024-12-09 05:24:27.892035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ede8a8
00:29:45.521 [2024-12-09 05:24:27.892467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.521 [2024-12-09 05:24:27.892487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:29:45.521 [2024-12-09 05:24:27.901316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef7538
00:29:45.521 [2024-12-09 05:24:27.901858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.521 [2024-12-09 05:24:27.901879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:29:45.521 [2024-12-09 05:24:27.910524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eea248
00:29:45.521 [2024-12-09 05:24:27.911193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.521 [2024-12-09 05:24:27.911220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:45.521 [2024-12-09 05:24:27.920705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee3d08
00:29:45.521 [2024-12-09 05:24:27.922166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.521 [2024-12-09 05:24:27.922186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:45.521 [2024-12-09 05:24:27.927115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eea248
00:29:45.521 [2024-12-09 05:24:27.927680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.521 [2024-12-09 05:24:27.927700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:29:45.521 [2024-12-09 05:24:27.936402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efe2e8
00:29:45.521 [2024-12-09 05:24:27.937141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.521 [2024-12-09 05:24:27.937162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:29:45.521 [2024-12-09 05:24:27.945263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef4298
00:29:45.521 [2024-12-09 05:24:27.946115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.521 [2024-12-09 05:24:27.946135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:29:45.521 [2024-12-09 05:24:27.955095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee6300
00:29:45.521 [2024-12-09 05:24:27.956085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.521 [2024-12-09 05:24:27.956105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:45.521 [2024-12-09 05:24:27.963902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef3a28
00:29:45.521 [2024-12-09 05:24:27.964831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.521 [2024-12-09 05:24:27.964851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:45.521 [2024-12-09 05:24:27.973069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee27f0
00:29:45.521 [2024-12-09 05:24:27.974144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.521 [2024-12-09 05:24:27.974164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:45.521 [2024-12-09 05:24:27.980282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee0630
00:29:45.521 [2024-12-09 05:24:27.980929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.521 [2024-12-09 05:24:27.980952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:29:45.782 [2024-12-09 05:24:27.989093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee4140
00:29:45.782 [2024-12-09 05:24:27.989752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.782 [2024-12-09 05:24:27.989772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:29:45.782 [2024-12-09 05:24:27.998045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee6fa8
00:29:45.782 [2024-12-09 05:24:27.998689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.782 [2024-12-09 05:24:27.998710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:29:45.782 [2024-12-09 05:24:28.006817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef2d80
00:29:45.782 [2024-12-09 05:24:28.007467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.782 [2024-12-09 05:24:28.007487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:29:45.782 [2024-12-09 05:24:28.015654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eebb98
00:29:45.782 [2024-12-09 05:24:28.016284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.782 [2024-12-09 05:24:28.016304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:29:45.782 [2024-12-09 05:24:28.024501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efcdd0
00:29:45.782 [2024-12-09 05:24:28.025159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.782 [2024-12-09 05:24:28.025179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:29:45.782 [2024-12-09 05:24:28.033319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef5be8
00:29:45.782 [2024-12-09 05:24:28.033967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.782 [2024-12-09 05:24:28.033986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:29:45.782 [2024-12-09 05:24:28.042456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eddc00
00:29:45.782 [2024-12-09 05:24:28.042882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.782 [2024-12-09 05:24:28.042902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:29:45.782 [2024-12-09 05:24:28.051650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eeee38
00:29:45.782 [2024-12-09 05:24:28.052312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.782 [2024-12-09 05:24:28.052332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:29:45.782 [2024-12-09 05:24:28.060635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef7970
00:29:45.782 [2024-12-09 05:24:28.061508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.782 [2024-12-09 05:24:28.061528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:29:45.782 [2024-12-09 05:24:28.069508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efd640
00:29:45.782 [2024-12-09 05:24:28.070382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.782 [2024-12-09 05:24:28.070401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:29:45.782 [2024-12-09 05:24:28.078318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef8a50
00:29:45.782 [2024-12-09 05:24:28.079186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.782 [2024-12-09 05:24:28.079205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:29:45.782 [2024-12-09 05:24:28.087190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef46d0
00:29:45.782 [2024-12-09 05:24:28.088055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.782 [2024-12-09 05:24:28.088075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:29:45.782 [2024-12-09 05:24:28.096278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef4298
00:29:45.782 [2024-12-09 05:24:28.096944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.782 [2024-12-09 05:24:28.096964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:45.782 [2024-12-09 05:24:28.106398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eeaef0
00:29:45.782 [2024-12-09 05:24:28.107817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.782 [2024-12-09 05:24:28.107837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:45.782 [2024-12-09 05:24:28.112652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef4298
00:29:45.782 [2024-12-09 05:24:28.113277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.782 [2024-12-09 05:24:28.113297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:29:45.782 [2024-12-09 05:24:28.121475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efe2e8
00:29:45.782 [2024-12-09 05:24:28.122232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.782 [2024-12-09 05:24:28.122252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:29:45.782 [2024-12-09 05:24:28.132254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef7970
00:29:45.782 [2024-12-09 05:24:28.133348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.782 [2024-12-09 05:24:28.133368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:45.782 [2024-12-09 05:24:28.141086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee84c0
00:29:45.782 [2024-12-09 05:24:28.142281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.782 [2024-12-09 05:24:28.142301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:29:45.783 [2024-12-09 05:24:28.150321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eff3c8
00:29:45.783 [2024-12-09 05:24:28.151629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.783 [2024-12-09 05:24:28.151649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:45.783 [2024-12-09 05:24:28.158525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef92c0
00:29:45.783 [2024-12-09 05:24:28.159530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.783 [2024-12-09 05:24:28.159550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:29:45.783 [2024-12-09 05:24:28.167284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee8088
00:29:45.783 [2024-12-09 05:24:28.168259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.783 [2024-12-09 05:24:28.168278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:29:45.783 [2024-12-09 05:24:28.176066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef20d8
00:29:45.783 [2024-12-09 05:24:28.177083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.783 [2024-12-09 05:24:28.177103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:29:45.783 [2024-12-09 05:24:28.184590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eeb328
00:29:45.783 [2024-12-09 05:24:28.185682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.783 [2024-12-09 05:24:28.185702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:45.783 [2024-12-09 05:24:28.193572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef20d8
00:29:45.783 [2024-12-09 05:24:28.194438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.783 [2024-12-09 05:24:28.194458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:29:45.783 [2024-12-09 05:24:28.202496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef0ff8
00:29:45.783 [2024-12-09 05:24:28.203379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.783 [2024-12-09 05:24:28.203399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:45.783 [2024-12-09 05:24:28.211370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef1868
00:29:45.783 [2024-12-09 05:24:28.212266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.783 [2024-12-09 05:24:28.212290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:45.783 [2024-12-09 05:24:28.220201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eeb760
00:29:45.783 [2024-12-09 05:24:28.221085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.783 [2024-12-09 05:24:28.221105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:45.783 [2024-12-09 05:24:28.228997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee7818
00:29:45.783 [2024-12-09 05:24:28.229863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.783 [2024-12-09 05:24:28.229883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:45.783 [2024-12-09 05:24:28.237866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eddc00
00:29:45.783 [2024-12-09 05:24:28.238777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:45.783 [2024-12-09 05:24:28.238797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:45.783 [2024-12-09 05:24:28.246742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efc998
00:29:45.783 [2024-12-09 05:24:28.247627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15060 len:1 SGL DATA
BLOCK OFFSET 0x0 len:0x1000 00:29:45.783 [2024-12-09 05:24:28.247647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:46.043 [2024-12-09 05:24:28.255573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ede470 00:29:46.043 [2024-12-09 05:24:28.256460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.043 [2024-12-09 05:24:28.256480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:46.043 [2024-12-09 05:24:28.264439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee1f80 00:29:46.043 [2024-12-09 05:24:28.265335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.043 [2024-12-09 05:24:28.265355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:46.043 [2024-12-09 05:24:28.273419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efb8b8 00:29:46.043 [2024-12-09 05:24:28.274295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.043 [2024-12-09 05:24:28.274316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:46.043 [2024-12-09 05:24:28.282256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef0788 00:29:46.043 [2024-12-09 05:24:28.283147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:88 nsid:1 lba:8919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.043 [2024-12-09 05:24:28.283167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:46.043 [2024-12-09 05:24:28.291101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee1b48 00:29:46.043 [2024-12-09 05:24:28.292010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.043 [2024-12-09 05:24:28.292030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:46.043 [2024-12-09 05:24:28.299931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efeb58 00:29:46.043 [2024-12-09 05:24:28.300800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.043 [2024-12-09 05:24:28.300820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:46.043 [2024-12-09 05:24:28.308814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee4140 00:29:46.044 [2024-12-09 05:24:28.309716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.044 [2024-12-09 05:24:28.309736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:46.044 [2024-12-09 05:24:28.317653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee6fa8 00:29:46.044 [2024-12-09 05:24:28.318535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.044 [2024-12-09 05:24:28.318554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:46.044 [2024-12-09 05:24:28.325875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef4298 00:29:46.044 [2024-12-09 05:24:28.326746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.044 [2024-12-09 05:24:28.326765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:46.044 [2024-12-09 05:24:28.335153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016eecc78 00:29:46.044 [2024-12-09 05:24:28.336121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.044 [2024-12-09 05:24:28.336141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.044 [2024-12-09 05:24:28.343376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ee12d8 00:29:46.044 [2024-12-09 05:24:28.344033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.044 [2024-12-09 05:24:28.344053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:46.044 [2024-12-09 05:24:28.352052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef1ca0 00:29:46.044 
[2024-12-09 05:24:28.352693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.044 [2024-12-09 05:24:28.352712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:46.044 [2024-12-09 05:24:28.360913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ede470 00:29:46.044 [2024-12-09 05:24:28.361547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.044 [2024-12-09 05:24:28.361567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:46.044 [2024-12-09 05:24:28.369733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016efc998 00:29:46.044 [2024-12-09 05:24:28.370392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.044 [2024-12-09 05:24:28.370412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:46.044 [2024-12-09 05:24:28.378834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd51b0) with pdu=0x200016ef6020 00:29:46.044 [2024-12-09 05:24:28.379374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.044 [2024-12-09 05:24:28.379393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.044 28806.00 IOPS, 112.52 MiB/s 00:29:46.044 Latency(us) 00:29:46.044 [2024-12-09T04:24:28.514Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.044 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:46.044 nvme0n1 : 2.00 28808.47 112.53 0.00 0.00 4437.08 1900.54 12058.62 00:29:46.044 [2024-12-09T04:24:28.514Z] =================================================================================================================== 00:29:46.044 [2024-12-09T04:24:28.514Z] Total : 28808.47 112.53 0.00 0.00 4437.08 1900.54 12058.62 00:29:46.044 { 00:29:46.044 "results": [ 00:29:46.044 { 00:29:46.044 "job": "nvme0n1", 00:29:46.044 "core_mask": "0x2", 00:29:46.044 "workload": "randwrite", 00:29:46.044 "status": "finished", 00:29:46.044 "queue_depth": 128, 00:29:46.044 "io_size": 4096, 00:29:46.044 "runtime": 2.004272, 00:29:46.044 "iops": 28808.46511850687, 00:29:46.044 "mibps": 112.53306686916746, 00:29:46.044 "io_failed": 0, 00:29:46.044 "io_timeout": 0, 00:29:46.044 "avg_latency_us": 4437.081396937998, 00:29:46.044 "min_latency_us": 1900.544, 00:29:46.044 "max_latency_us": 12058.624 00:29:46.044 } 00:29:46.044 ], 00:29:46.044 "core_count": 1 00:29:46.044 } 00:29:46.044 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:46.044 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:46.044 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:46.044 | .driver_specific 00:29:46.044 | .nvme_error 00:29:46.044 | .status_code 00:29:46.044 | .command_transient_transport_error' 00:29:46.044 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:46.304 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 226 > 0 )) 00:29:46.304 05:24:28 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 650344 00:29:46.304 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 650344 ']' 00:29:46.304 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 650344 00:29:46.304 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:46.304 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:46.304 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 650344 00:29:46.304 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:46.304 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:46.304 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 650344' 00:29:46.304 killing process with pid 650344 00:29:46.304 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 650344 00:29:46.304 Received shutdown signal, test time was about 2.000000 seconds 00:29:46.304 00:29:46.304 Latency(us) 00:29:46.304 [2024-12-09T04:24:28.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.304 [2024-12-09T04:24:28.774Z] =================================================================================================================== 00:29:46.304 [2024-12-09T04:24:28.774Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:46.304 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 650344 00:29:46.563 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # 
run_bperf_err randwrite 131072 16 00:29:46.563 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:46.563 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:46.563 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:46.563 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:46.563 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=651046 00:29:46.563 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 651046 /var/tmp/bperf.sock 00:29:46.563 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:46.563 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 651046 ']' 00:29:46.563 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:46.563 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:46.563 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:46.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:46.563 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:46.563 05:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:46.563 [2024-12-09 05:24:28.960063] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:29:46.563 [2024-12-09 05:24:28.960119] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid651046 ] 00:29:46.563 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:46.563 Zero copy mechanism will not be used. 00:29:46.823 [2024-12-09 05:24:29.053314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.823 [2024-12-09 05:24:29.093707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.391 05:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.391 05:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:47.391 05:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:47.391 05:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:47.651 05:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:47.651 05:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.651 05:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:29:47.651 05:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.651 05:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:47.651 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:47.910 nvme0n1 00:29:47.910 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:47.910 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.910 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:47.910 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.910 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:47.910 05:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:47.910 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:47.910 Zero copy mechanism will not be used. 00:29:47.910 Running I/O for 2 seconds... 
00:29:47.910 [2024-12-09 05:24:30.369393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:47.910 [2024-12-09 05:24:30.369480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.910 [2024-12-09 05:24:30.369509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:47.910 [2024-12-09 05:24:30.375518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:47.910 [2024-12-09 05:24:30.375585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.910 [2024-12-09 05:24:30.375607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.171 [2024-12-09 05:24:30.380593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.171 [2024-12-09 05:24:30.380662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.171 [2024-12-09 05:24:30.380684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.171 [2024-12-09 05:24:30.385396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.171 [2024-12-09 05:24:30.385467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.171 [2024-12-09 05:24:30.385488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.171 [2024-12-09 05:24:30.390177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.171 [2024-12-09 05:24:30.390252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.171 [2024-12-09 05:24:30.390272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.171 [2024-12-09 05:24:30.395032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.171 [2024-12-09 05:24:30.395101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.171 [2024-12-09 05:24:30.395126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.171 [2024-12-09 05:24:30.399853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.171 [2024-12-09 05:24:30.399919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.171 [2024-12-09 05:24:30.399939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.171 [2024-12-09 05:24:30.404615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.171 [2024-12-09 05:24:30.404688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.171 [2024-12-09 05:24:30.404708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.171 [2024-12-09 05:24:30.409451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.171 [2024-12-09 05:24:30.409514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.171 [2024-12-09 05:24:30.409534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.171 [2024-12-09 05:24:30.414430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.171 [2024-12-09 05:24:30.414521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.171 [2024-12-09 05:24:30.414541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.171 [2024-12-09 05:24:30.420369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.171 [2024-12-09 05:24:30.420524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.171 [2024-12-09 05:24:30.420543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.171 [2024-12-09 05:24:30.426848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.171 [2024-12-09 05:24:30.427014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:48.171 [2024-12-09 05:24:30.427033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.171 [2024-12-09 05:24:30.433170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.171 [2024-12-09 05:24:30.433277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.171 [2024-12-09 05:24:30.433297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.171 [2024-12-09 05:24:30.438820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.171 [2024-12-09 05:24:30.438880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.171 [2024-12-09 05:24:30.438899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.171 [2024-12-09 05:24:30.444163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.171 [2024-12-09 05:24:30.444294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.171 [2024-12-09 05:24:30.444314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.171 [2024-12-09 05:24:30.449901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.171 [2024-12-09 05:24:30.449969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.171 [2024-12-09 05:24:30.449989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.171 [2024-12-09 05:24:30.455231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.171 [2024-12-09 05:24:30.455364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.171 [2024-12-09 05:24:30.455383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.171 [2024-12-09 05:24:30.461333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.171 [2024-12-09 05:24:30.461520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.171 [2024-12-09 05:24:30.461539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.171 [2024-12-09 05:24:30.467214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.171 [2024-12-09 05:24:30.467293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.171 [2024-12-09 05:24:30.467314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.171 [2024-12-09 05:24:30.472749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.171 [2024-12-09 05:24:30.472836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.171 [2024-12-09 05:24:30.472856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.171 [2024-12-09 05:24:30.478447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.171 [2024-12-09 05:24:30.478535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.171 [2024-12-09 05:24:30.478555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.171 [2024-12-09 05:24:30.484294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.171 [2024-12-09 05:24:30.484351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.171 [2024-12-09 05:24:30.484381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.171 [2024-12-09 05:24:30.490512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.490572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.490593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.496514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 
00:29:48.172 [2024-12-09 05:24:30.496585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.496605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.501552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.501613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.501632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.506466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.506593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.506612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.511430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.511486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.511505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.516203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.516280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.516299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.520957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.521029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.521048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.525873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.525991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.526010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.530923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.531020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.531041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.536564] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.536624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.536648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.541966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.542028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.542046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.547503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.547564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.547584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.552539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.552595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.552614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:29:48.172 [2024-12-09 05:24:30.557973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.558088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.558107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.563818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.563878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.563898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.569136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.569197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.569223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.574617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.574754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.574774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.580204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.580268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.580287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.585637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.585707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.585726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.590612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.590717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.590737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.596237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.596308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.596328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.603835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.603982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.604002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.610565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.610810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.610831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.618057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.618195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.618222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.625535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.625667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:48.172 [2024-12-09 05:24:30.625697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.172 [2024-12-09 05:24:30.633318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.172 [2024-12-09 05:24:30.633445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.172 [2024-12-09 05:24:30.633465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.641542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.433 [2024-12-09 05:24:30.641689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.641709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.649094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.433 [2024-12-09 05:24:30.649245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.649266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.656751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.433 [2024-12-09 05:24:30.656900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.656920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.664108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.433 [2024-12-09 05:24:30.664251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.664271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.671104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.433 [2024-12-09 05:24:30.671247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.671266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.677486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.433 [2024-12-09 05:24:30.677669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.677688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.684378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.433 [2024-12-09 05:24:30.684536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.684555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.691057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.433 [2024-12-09 05:24:30.691203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.691230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.697781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.433 [2024-12-09 05:24:30.697935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.697955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.704102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.433 [2024-12-09 05:24:30.704271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.704298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.710630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 
00:29:48.433 [2024-12-09 05:24:30.710737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.710757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.715653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.433 [2024-12-09 05:24:30.715806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.715825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.721349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.433 [2024-12-09 05:24:30.721506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.721525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.726712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.433 [2024-12-09 05:24:30.726799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.726819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.732267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.433 [2024-12-09 05:24:30.732417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.732437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.737654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.433 [2024-12-09 05:24:30.737741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.737760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.743403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.433 [2024-12-09 05:24:30.743466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.743486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.748763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.433 [2024-12-09 05:24:30.748858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.748877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.753908] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.433 [2024-12-09 05:24:30.753974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.753993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.759006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.433 [2024-12-09 05:24:30.759148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.759167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.764091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.433 [2024-12-09 05:24:30.764174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.764193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.433 [2024-12-09 05:24:30.769305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.433 [2024-12-09 05:24:30.769392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.433 [2024-12-09 05:24:30.769412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:48.433 [2024-12-09 05:24:30.774427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.434 [2024-12-09 05:24:30.774493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.434 [2024-12-09 05:24:30.774512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.434 [2024-12-09 05:24:30.779331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.434 [2024-12-09 05:24:30.779410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.434 [2024-12-09 05:24:30.779430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.434 [2024-12-09 05:24:30.784423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.434 [2024-12-09 05:24:30.784592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.434 [2024-12-09 05:24:30.784611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.434 [2024-12-09 05:24:30.789538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.434 [2024-12-09 05:24:30.789637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.434 [2024-12-09 05:24:30.789657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.434 [2024-12-09 05:24:30.794450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.434 [2024-12-09 05:24:30.794542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.434 [2024-12-09 05:24:30.794562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.434 [2024-12-09 05:24:30.799537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.434 [2024-12-09 05:24:30.799604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.434 [2024-12-09 05:24:30.799624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.434 [2024-12-09 05:24:30.804625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.434 [2024-12-09 05:24:30.804711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.434 [2024-12-09 05:24:30.804731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.434 [2024-12-09 05:24:30.809866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.434 [2024-12-09 05:24:30.809965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.434 [2024-12-09 05:24:30.809985] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.434 [2024-12-09 05:24:30.815003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.434 [2024-12-09 05:24:30.815087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.434 [2024-12-09 05:24:30.815107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.434 [2024-12-09 05:24:30.820070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.434 [2024-12-09 05:24:30.820224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.434 [2024-12-09 05:24:30.820259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.434 [2024-12-09 05:24:30.825415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.434 [2024-12-09 05:24:30.825490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.434 [2024-12-09 05:24:30.825511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.434 [2024-12-09 05:24:30.830473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.434 [2024-12-09 05:24:30.830609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:48.434 [2024-12-09 05:24:30.830628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.434 [2024-12-09 05:24:30.835558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.434 [2024-12-09 05:24:30.835704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.434 [2024-12-09 05:24:30.835723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.434 [2024-12-09 05:24:30.840735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.434 [2024-12-09 05:24:30.840817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.434 [2024-12-09 05:24:30.840840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.434 [2024-12-09 05:24:30.845825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.434 [2024-12-09 05:24:30.845939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.434 [2024-12-09 05:24:30.845958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.434 [2024-12-09 05:24:30.850874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.434 [2024-12-09 05:24:30.851039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.434 [2024-12-09 05:24:30.851058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.434 [2024-12-09 05:24:30.856118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.434 [2024-12-09 05:24:30.856248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.434 [2024-12-09 05:24:30.856268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.434 [2024-12-09 05:24:30.861183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.434 [2024-12-09 05:24:30.861272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.434 [2024-12-09 05:24:30.861292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.434 [2024-12-09 05:24:30.866235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.434 [2024-12-09 05:24:30.866331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.434 [2024-12-09 05:24:30.866351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.434 [2024-12-09 05:24:30.871449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.434 [2024-12-09 05:24:30.871534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.434 [2024-12-09 05:24:30.871554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... dozens of near-identical cycles elided: each repeats the same three messages on tqpair=(0x1bd54f0), pdu=0x200016eff3c8, qid:1 cid:0 — a tcp.c:2233:data_crc32_calc_done *ERROR* "Data digest error", the nvme_qpair.c: 243 WRITE command print (nsid:1, varying lba, len:32, SGL TRANSPORT DATA BLOCK), and the nvme_qpair.c: 474 completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) with sqhd cycling 0002/0022/0042/0062 — from 05:24:30.876 through 05:24:31.223 ...]
00:29:48.957 [2024-12-09 05:24:31.223292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.223494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.223515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.228056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.228292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.228313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.233193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.233402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.233428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.237484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.237692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.237713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.241765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.241971] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.241991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.245994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.246214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.246235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.250277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.250481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.250502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.254517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.254726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.254747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.258730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.258937] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.258957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.262965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.263171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.263192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.267138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.267359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.267380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.271387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.271592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.271613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.275595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with 
pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.275804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.275832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.279960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.280164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.280185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.284128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.284335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.284356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.288346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.288556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.288577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.292582] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.292795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.292815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.296760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.296966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.296985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.300984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.301196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.301222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.305175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.305387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.305417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 
05:24:31.309351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.309566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.309587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.313550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.313761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.313782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.317812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.318022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.318042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.322254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.322487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.322507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.327149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.327688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.327710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.332165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.332373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.332393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.336614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.336821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.336841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.340988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.341198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.341224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.345446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.345658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.345682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.349771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.349986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.350006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.354241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.354447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.354467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.358999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.359202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.359229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.363308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.363510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.363529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.367474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.367688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.367709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.957 5966.00 IOPS, 745.75 MiB/s [2024-12-09T04:24:31.427Z] [2024-12-09 05:24:31.372895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.373122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.373142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.377312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.377536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.377558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.381747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.381983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.382005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.386302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.386538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.386560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.957 [2024-12-09 05:24:31.390823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.957 [2024-12-09 05:24:31.391066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.957 [2024-12-09 05:24:31.391088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.958 [2024-12-09 05:24:31.395332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.958 [2024-12-09 05:24:31.395560] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.958 [2024-12-09 05:24:31.395582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.958 [2024-12-09 05:24:31.399829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.958 [2024-12-09 05:24:31.400067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.958 [2024-12-09 05:24:31.400088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.958 [2024-12-09 05:24:31.404259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.958 [2024-12-09 05:24:31.404499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.958 [2024-12-09 05:24:31.404520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:48.958 [2024-12-09 05:24:31.408723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.958 [2024-12-09 05:24:31.408960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.958 [2024-12-09 05:24:31.408981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:48.958 [2024-12-09 05:24:31.413179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 
00:29:48.958 [2024-12-09 05:24:31.413413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.958 [2024-12-09 05:24:31.413435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:48.958 [2024-12-09 05:24:31.417605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.958 [2024-12-09 05:24:31.417839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.958 [2024-12-09 05:24:31.417860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:48.958 [2024-12-09 05:24:31.422194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:48.958 [2024-12-09 05:24:31.422445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.958 [2024-12-09 05:24:31.422465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.218 [2024-12-09 05:24:31.426940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.218 [2024-12-09 05:24:31.427165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.218 [2024-12-09 05:24:31.427187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.218 [2024-12-09 05:24:31.432315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.218 [2024-12-09 05:24:31.432544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.218 [2024-12-09 05:24:31.432565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.218 [2024-12-09 05:24:31.437314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.218 [2024-12-09 05:24:31.437543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.218 [2024-12-09 05:24:31.437564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.218 [2024-12-09 05:24:31.442887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.218 [2024-12-09 05:24:31.443112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.218 [2024-12-09 05:24:31.443135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.218 [2024-12-09 05:24:31.448081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.218 [2024-12-09 05:24:31.448330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.218 [2024-12-09 05:24:31.448352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.218 [2024-12-09 05:24:31.453367] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.218 [2024-12-09 05:24:31.453594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.218 [2024-12-09 05:24:31.453616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.218 [2024-12-09 05:24:31.458399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.218 [2024-12-09 05:24:31.458627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.218 [2024-12-09 05:24:31.458649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.218 [2024-12-09 05:24:31.463547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.218 [2024-12-09 05:24:31.463786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.218 [2024-12-09 05:24:31.463808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.218 [2024-12-09 05:24:31.469069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.219 [2024-12-09 05:24:31.469301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.219 [2024-12-09 05:24:31.469327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:49.219 [2024-12-09 05:24:31.474704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.219 [2024-12-09 05:24:31.474929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.219 [2024-12-09 05:24:31.474951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.219 [2024-12-09 05:24:31.479797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.219 [2024-12-09 05:24:31.480022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.219 [2024-12-09 05:24:31.480043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.219 [2024-12-09 05:24:31.484646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.219 [2024-12-09 05:24:31.484874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.219 [2024-12-09 05:24:31.484896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.219 [2024-12-09 05:24:31.489355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.219 [2024-12-09 05:24:31.489595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.219 [2024-12-09 05:24:31.489616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.494092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.494323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.494345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.498787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.499012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.499034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.503613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.503839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.503860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.508431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.508658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.508680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.513685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.513922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.513944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.518423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.518661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.518682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.523100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.523349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.523371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.527988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.528231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.528252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.533457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.533683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.533705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.538852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.539077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.539099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.544021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.544263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.544284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.548905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.549129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.549150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.553811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.554033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.554054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.559417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.559655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.559677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.564893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.565117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.565138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.570050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.570290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.570312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.574792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.575014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.575036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.580338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.580566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.580588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.586020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.586261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.586283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.591397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.591623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.591645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.596309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.596540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.596561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.601201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.601261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.601284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.607755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.607982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.608004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.612771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.612998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.613020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.617580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.617805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.617826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.622345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.622574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.622596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.627250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.627477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.627499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.632159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.632390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.632410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.637088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.637336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.637358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.641967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.642220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.642242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.646846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.647076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.647097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.651687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.651912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.651933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.656538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.656774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.656796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.661310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.661541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.661563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.666115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.666359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.666381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.670896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.671131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.671153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.675667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.675890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.675912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.680568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.680792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.680813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:49.219 [2024-12-09 05:24:31.685377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.219 [2024-12-09 05:24:31.685603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.219 [2024-12-09 05:24:31.685625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:49.479 [2024-12-09 05:24:31.690183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.479 [2024-12-09 05:24:31.690417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.479 [2024-12-09 05:24:31.690439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.694933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.695158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.695180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.699728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.699953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.699974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.704501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.704740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.704761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.709071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.709302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.709323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.713982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.714213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.714234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.719061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.719293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.719313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.724672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.724898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.724920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.729900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.730125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.730150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.734737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.734961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.734982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.739540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.739764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.739786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.744400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.744630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.744651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.748994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.749238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.749259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.753818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.754044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.754067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.758887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.759123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.759144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.764788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.765024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.765045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.769938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.770164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.770186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.774793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.775022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.775043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.779566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.779791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.779813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.784334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.784561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.784583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.789036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.789284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.789305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.793740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.793968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.793989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.798479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.798702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.798724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.803845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.804083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.804104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.809386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.809616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.809637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.815017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.815260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.815281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.819851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.480 [2024-12-09 05:24:31.820081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.480 [2024-12-09 05:24:31.820102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:49.480 [2024-12-09 05:24:31.825463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.481 [2024-12-09 05:24:31.825690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.481 [2024-12-09 05:24:31.825712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:49.481 [2024-12-09 05:24:31.831779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.481 [2024-12-09 05:24:31.832005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.481 [2024-12-09 05:24:31.832027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:49.481 [2024-12-09 05:24:31.838361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.481 [2024-12-09 05:24:31.838588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.481 [2024-12-09 05:24:31.838609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:49.481 [2024-12-09 05:24:31.846072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.481 [2024-12-09 05:24:31.846317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.481 [2024-12-09 05:24:31.846340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:49.481 [2024-12-09 05:24:31.853156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.481 [2024-12-09 05:24:31.853398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.481 [2024-12-09 05:24:31.853420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:49.481 [2024-12-09 05:24:31.860051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.481 [2024-12-09 05:24:31.860289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.481 [2024-12-09 05:24:31.860311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:49.481 [2024-12-09 05:24:31.866377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.481 [2024-12-09 05:24:31.866602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.481 [2024-12-09 05:24:31.866622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:49.481 [2024-12-09 05:24:31.873548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.481 [2024-12-09 05:24:31.873783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.481 [2024-12-09 05:24:31.873809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:49.481 [2024-12-09 05:24:31.881070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.481 [2024-12-09 05:24:31.881311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.481 [2024-12-09 05:24:31.881333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:49.481 [2024-12-09 05:24:31.886748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.481 [2024-12-09 05:24:31.886975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.481 [2024-12-09 05:24:31.886997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:49.481 [2024-12-09 05:24:31.892086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.481 [2024-12-09 05:24:31.892323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.481 [2024-12-09 05:24:31.892345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:49.481 [2024-12-09 05:24:31.897083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.481 [2024-12-09 05:24:31.897315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.481 [2024-12-09 05:24:31.897337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:49.481 [2024-12-09 05:24:31.902259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.481 [2024-12-09 05:24:31.902495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.481 [2024-12-09 05:24:31.902517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:49.481 [2024-12-09 05:24:31.906838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.481 [2024-12-09 05:24:31.907080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.481 [2024-12-09 05:24:31.907101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:49.481 [2024-12-09 05:24:31.911511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.481 [2024-12-09 05:24:31.911743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:49.481 [2024-12-09 05:24:31.911765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:49.481 [2024-12-09 05:24:31.917037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8
00:29:49.481 [2024-12-09 05:24:31.917258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.481 [2024-12-09 05:24:31.917280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.481 [2024-12-09 05:24:31.923774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.481 [2024-12-09 05:24:31.924005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.481 [2024-12-09 05:24:31.924026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.481 [2024-12-09 05:24:31.930362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.481 [2024-12-09 05:24:31.930589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.481 [2024-12-09 05:24:31.930611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.481 [2024-12-09 05:24:31.936642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.481 [2024-12-09 05:24:31.936867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.481 [2024-12-09 05:24:31.936891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.481 [2024-12-09 05:24:31.943195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.481 [2024-12-09 05:24:31.943445] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.481 [2024-12-09 05:24:31.943467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.742 [2024-12-09 05:24:31.949916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.742 [2024-12-09 05:24:31.950020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.742 [2024-12-09 05:24:31.950040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.742 [2024-12-09 05:24:31.957641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.742 [2024-12-09 05:24:31.957879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.742 [2024-12-09 05:24:31.957901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.742 [2024-12-09 05:24:31.963192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.742 [2024-12-09 05:24:31.963430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.742 [2024-12-09 05:24:31.963452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.742 [2024-12-09 05:24:31.967992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.742 [2024-12-09 
05:24:31.968224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.742 [2024-12-09 05:24:31.968245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.742 [2024-12-09 05:24:31.972727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.742 [2024-12-09 05:24:31.972963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.742 [2024-12-09 05:24:31.972985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.742 [2024-12-09 05:24:31.977425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.742 [2024-12-09 05:24:31.977654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.742 [2024-12-09 05:24:31.977677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.742 [2024-12-09 05:24:31.982388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.742 [2024-12-09 05:24:31.982617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.742 [2024-12-09 05:24:31.982639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.742 [2024-12-09 05:24:31.987265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) 
with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:31.987492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:31.987514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:31.992063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:31.992298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:31.992318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:31.996906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:31.997131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:31.997153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:32.001714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:32.001953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:32.001975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:32.006534] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:32.006763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:32.006784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:32.011362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:32.011600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:32.011621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:32.016095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:32.016330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:32.016355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:32.021041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:32.021272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:32.021292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 
05:24:32.025972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:32.026217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:32.026254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:32.030714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:32.030958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:32.030980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:32.035324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:32.035570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:32.035592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:32.039949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:32.040194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:32.040222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:32.044522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:32.044754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:32.044776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:32.049079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:32.049316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:32.049338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:32.053701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:32.053946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:32.053968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:32.058335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:32.058584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:32.058606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:32.063008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:32.063259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:32.063281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:32.067685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:32.067931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:32.067954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:32.072336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:32.072582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:32.072604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:32.076968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:32.077217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:32.077239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:32.081611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:32.081852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:32.081876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:32.086225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:32.086465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:32.086487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:32.090735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:32.090965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.743 [2024-12-09 05:24:32.090988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:32.095532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.743 [2024-12-09 05:24:32.095775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:49.743 [2024-12-09 05:24:32.095797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.743 [2024-12-09 05:24:32.100175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.744 [2024-12-09 05:24:32.100430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.100452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.744 [2024-12-09 05:24:32.104822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.744 [2024-12-09 05:24:32.105068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.105090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.744 [2024-12-09 05:24:32.109471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.744 [2024-12-09 05:24:32.109716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.109738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.744 [2024-12-09 05:24:32.114117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.744 [2024-12-09 05:24:32.114367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.114389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.744 [2024-12-09 05:24:32.118772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.744 [2024-12-09 05:24:32.119000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.119022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.744 [2024-12-09 05:24:32.123674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.744 [2024-12-09 05:24:32.123908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.123929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.744 [2024-12-09 05:24:32.129387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.744 [2024-12-09 05:24:32.129613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.129634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.744 [2024-12-09 05:24:32.135704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.744 [2024-12-09 05:24:32.135933] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.135955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.744 [2024-12-09 05:24:32.142284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.744 [2024-12-09 05:24:32.142510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.142535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.744 [2024-12-09 05:24:32.148928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.744 [2024-12-09 05:24:32.149171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.149194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.744 [2024-12-09 05:24:32.154896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.744 [2024-12-09 05:24:32.155125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.155148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.744 [2024-12-09 05:24:32.159712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 
00:29:49.744 [2024-12-09 05:24:32.159936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.159958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.744 [2024-12-09 05:24:32.164303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.744 [2024-12-09 05:24:32.164544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.164565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.744 [2024-12-09 05:24:32.168926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.744 [2024-12-09 05:24:32.169152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.169174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.744 [2024-12-09 05:24:32.173528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.744 [2024-12-09 05:24:32.173767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.173788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.744 [2024-12-09 05:24:32.178237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.744 [2024-12-09 05:24:32.178479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.178500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.744 [2024-12-09 05:24:32.182939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.744 [2024-12-09 05:24:32.183169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.183191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.744 [2024-12-09 05:24:32.187555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.744 [2024-12-09 05:24:32.187792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.187814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:49.744 [2024-12-09 05:24:32.192069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.744 [2024-12-09 05:24:32.192304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.192325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.744 [2024-12-09 05:24:32.196667] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.744 [2024-12-09 05:24:32.196907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.196929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:49.744 [2024-12-09 05:24:32.201295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.744 [2024-12-09 05:24:32.201534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.201555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:49.744 [2024-12-09 05:24:32.205935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:49.744 [2024-12-09 05:24:32.206162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.744 [2024-12-09 05:24:32.206183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.005 [2024-12-09 05:24:32.210571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.005 [2024-12-09 05:24:32.210819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.005 [2024-12-09 05:24:32.210841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:50.005 [2024-12-09 05:24:32.215116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.005 [2024-12-09 05:24:32.215371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.005 [2024-12-09 05:24:32.215393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.005 [2024-12-09 05:24:32.219705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.005 [2024-12-09 05:24:32.219943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.005 [2024-12-09 05:24:32.219965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.005 [2024-12-09 05:24:32.224372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.224614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.224635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.228945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.229182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.229204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.233590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.233819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.233841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.238115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.238353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.238375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.242738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.242980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.243000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.247517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.247745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.247766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.253280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.253515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.253538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.259763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.260005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.260027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.265142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.265399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.265422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.271407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.271642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:50.006 [2024-12-09 05:24:32.271669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.278142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.278387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.278409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.284383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.284600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.284621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.290615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.290863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.290885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.296892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.297130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.297152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.303486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.303739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.303761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.309864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.310093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.310114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.316463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.316691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.316712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.323042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.323279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.323302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.329855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.330199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.330228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.335897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.336163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.336185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.341871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.342124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.342146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.348412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 
00:29:50.006 [2024-12-09 05:24:32.348686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.348708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.354831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.355100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.355122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.360900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.361147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.361170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.365816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.366019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.366039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:50.006 [2024-12-09 05:24:32.370186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.006 [2024-12-09 05:24:32.370395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.006 [2024-12-09 05:24:32.370426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:50.006 5973.00 IOPS, 746.62 MiB/s [2024-12-09T04:24:32.476Z] [2024-12-09 05:24:32.375268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bd54f0) with pdu=0x200016eff3c8 00:29:50.007 [2024-12-09 05:24:32.375324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.007 [2024-12-09 05:24:32.375344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:50.007 00:29:50.007 Latency(us) 00:29:50.007 [2024-12-09T04:24:32.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.007 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:50.007 nvme0n1 : 2.00 5972.18 746.52 0.00 0.00 2674.87 1756.36 8545.89 00:29:50.007 [2024-12-09T04:24:32.477Z] =================================================================================================================== 00:29:50.007 [2024-12-09T04:24:32.477Z] Total : 5972.18 746.52 0.00 0.00 2674.87 1756.36 8545.89 00:29:50.007 { 00:29:50.007 "results": [ 00:29:50.007 { 00:29:50.007 "job": "nvme0n1", 00:29:50.007 "core_mask": "0x2", 00:29:50.007 "workload": "randwrite", 00:29:50.007 "status": "finished", 00:29:50.007 "queue_depth": 16, 00:29:50.007 "io_size": 131072, 00:29:50.007 "runtime": 2.003791, 00:29:50.007 "iops": 5972.1797333155, 00:29:50.007 "mibps": 746.5224666644375, 00:29:50.007 "io_failed": 0, 00:29:50.007 "io_timeout": 0, 00:29:50.007 
"avg_latency_us": 2674.8671465195953, 00:29:50.007 "min_latency_us": 1756.3648, 00:29:50.007 "max_latency_us": 8545.8944 00:29:50.007 } 00:29:50.007 ], 00:29:50.007 "core_count": 1 00:29:50.007 } 00:29:50.007 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:50.007 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:50.007 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:50.007 | .driver_specific 00:29:50.007 | .nvme_error 00:29:50.007 | .status_code 00:29:50.007 | .command_transient_transport_error' 00:29:50.007 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:50.265 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 387 > 0 )) 00:29:50.265 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 651046 00:29:50.265 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 651046 ']' 00:29:50.265 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 651046 00:29:50.265 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:50.265 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:50.265 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 651046 00:29:50.265 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:50.265 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:50.265 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 651046' 00:29:50.265 killing process with pid 651046 00:29:50.265 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 651046 00:29:50.265 Received shutdown signal, test time was about 2.000000 seconds 00:29:50.265 00:29:50.265 Latency(us) 00:29:50.265 [2024-12-09T04:24:32.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.265 [2024-12-09T04:24:32.735Z] =================================================================================================================== 00:29:50.265 [2024-12-09T04:24:32.735Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:50.265 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 651046 00:29:50.524 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 648968 00:29:50.524 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 648968 ']' 00:29:50.524 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 648968 00:29:50.524 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:50.524 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:50.524 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 648968 00:29:50.524 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:50.524 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:50.524 
05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 648968' 00:29:50.524 killing process with pid 648968 00:29:50.524 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 648968 00:29:50.524 05:24:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 648968 00:29:50.782 00:29:50.782 real 0m16.637s 00:29:50.782 user 0m32.040s 00:29:50.782 sys 0m5.252s 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:50.782 ************************************ 00:29:50.782 END TEST nvmf_digest_error 00:29:50.782 ************************************ 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:50.782 rmmod nvme_tcp 00:29:50.782 rmmod nvme_fabrics 00:29:50.782 rmmod nvme_keyring 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:50.782 05:24:33 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 648968 ']' 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 648968 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 648968 ']' 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 648968 00:29:50.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (648968) - No such process 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 648968 is not found' 00:29:50.782 Process with pid 648968 is not found 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:50.782 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:29:51.040 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:51.040 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:51.040 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.040 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:51.040 05:24:33 nvmf_tcp.nvmf_host.nvmf_digest 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.942 05:24:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:52.942 00:29:52.942 real 0m42.080s 00:29:52.942 user 1m3.088s 00:29:52.942 sys 0m16.351s 00:29:52.942 05:24:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:52.942 05:24:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:52.942 ************************************ 00:29:52.942 END TEST nvmf_digest 00:29:52.942 ************************************ 00:29:52.942 05:24:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:52.942 05:24:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:52.942 05:24:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:52.942 05:24:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:52.942 05:24:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:52.942 05:24:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:52.942 05:24:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.202 ************************************ 00:29:53.202 START TEST nvmf_bdevperf 00:29:53.202 ************************************ 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:53.202 * Looking for test storage... 
00:29:53.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:53.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.202 --rc genhtml_branch_coverage=1 00:29:53.202 --rc genhtml_function_coverage=1 00:29:53.202 --rc genhtml_legend=1 00:29:53.202 --rc geninfo_all_blocks=1 00:29:53.202 --rc geninfo_unexecuted_blocks=1 00:29:53.202 00:29:53.202 ' 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:29:53.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.202 --rc genhtml_branch_coverage=1 00:29:53.202 --rc genhtml_function_coverage=1 00:29:53.202 --rc genhtml_legend=1 00:29:53.202 --rc geninfo_all_blocks=1 00:29:53.202 --rc geninfo_unexecuted_blocks=1 00:29:53.202 00:29:53.202 ' 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:53.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.202 --rc genhtml_branch_coverage=1 00:29:53.202 --rc genhtml_function_coverage=1 00:29:53.202 --rc genhtml_legend=1 00:29:53.202 --rc geninfo_all_blocks=1 00:29:53.202 --rc geninfo_unexecuted_blocks=1 00:29:53.202 00:29:53.202 ' 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:53.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.202 --rc genhtml_branch_coverage=1 00:29:53.202 --rc genhtml_function_coverage=1 00:29:53.202 --rc genhtml_legend=1 00:29:53.202 --rc geninfo_all_blocks=1 00:29:53.202 --rc geninfo_unexecuted_blocks=1 00:29:53.202 00:29:53.202 ' 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:53.202 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:53.203 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:53.203 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:53.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:53.203 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:53.203 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:53.203 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:53.203 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:53.203 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:53.203 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:53.203 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:53.203 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:53.203 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:53.203 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:53.203 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:53.203 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.203 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.203 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.462 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:53.462 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:53.462 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:53.462 05:24:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:01.590 05:24:42 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:01.590 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:01.590 
05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:01.590 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:01.590 Found net devices under 0000:af:00.0: cvl_0_0 00:30:01.590 05:24:42 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:01.590 Found net devices under 0000:af:00.1: cvl_0_1 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:01.590 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:01.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:01.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:30:01.591 00:30:01.591 --- 10.0.0.2 ping statistics --- 00:30:01.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.591 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:01.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:01.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:30:01.591 00:30:01.591 --- 10.0.0.1 ping statistics --- 00:30:01.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.591 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=655422 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 655422 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 655422 ']' 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:01.591 05:24:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:01.591 [2024-12-09 05:24:42.984836] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:30:01.591 [2024-12-09 05:24:42.984884] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:01.591 [2024-12-09 05:24:43.081216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:01.591 [2024-12-09 05:24:43.122331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:01.591 [2024-12-09 05:24:43.122366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:01.591 [2024-12-09 05:24:43.122376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.591 [2024-12-09 05:24:43.122385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:01.591 [2024-12-09 05:24:43.122392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:01.591 [2024-12-09 05:24:43.123936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:01.591 [2024-12-09 05:24:43.124043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.591 [2024-12-09 05:24:43.124044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:01.591 [2024-12-09 05:24:43.882700] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:01.591 Malloc0 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:01.591 [2024-12-09 05:24:43.951273] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:01.591 
05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:01.591 { 00:30:01.591 "params": { 00:30:01.591 "name": "Nvme$subsystem", 00:30:01.591 "trtype": "$TEST_TRANSPORT", 00:30:01.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:01.591 "adrfam": "ipv4", 00:30:01.591 "trsvcid": "$NVMF_PORT", 00:30:01.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:01.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:01.591 "hdgst": ${hdgst:-false}, 00:30:01.591 "ddgst": ${ddgst:-false} 00:30:01.591 }, 00:30:01.591 "method": "bdev_nvme_attach_controller" 00:30:01.591 } 00:30:01.591 EOF 00:30:01.591 )") 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:01.591 05:24:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:01.591 "params": { 00:30:01.591 "name": "Nvme1", 00:30:01.591 "trtype": "tcp", 00:30:01.591 "traddr": "10.0.0.2", 00:30:01.591 "adrfam": "ipv4", 00:30:01.591 "trsvcid": "4420", 00:30:01.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:01.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:01.591 "hdgst": false, 00:30:01.591 "ddgst": false 00:30:01.591 }, 00:30:01.591 "method": "bdev_nvme_attach_controller" 00:30:01.591 }' 00:30:01.591 [2024-12-09 05:24:44.007599] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:30:01.592 [2024-12-09 05:24:44.007644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid655701 ] 00:30:01.850 [2024-12-09 05:24:44.102098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.850 [2024-12-09 05:24:44.141350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.109 Running I/O for 1 seconds... 00:30:03.047 11610.00 IOPS, 45.35 MiB/s 00:30:03.047 Latency(us) 00:30:03.047 [2024-12-09T04:24:45.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:03.047 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:03.047 Verification LBA range: start 0x0 length 0x4000 00:30:03.047 Nvme1n1 : 1.00 11687.13 45.65 0.00 0.00 10911.60 1422.13 14155.78 00:30:03.047 [2024-12-09T04:24:45.517Z] =================================================================================================================== 00:30:03.047 [2024-12-09T04:24:45.517Z] Total : 11687.13 45.65 0.00 0.00 10911.60 1422.13 14155.78 00:30:03.306 05:24:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=655967 00:30:03.306 05:24:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:03.306 05:24:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:03.306 05:24:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:03.306 05:24:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:03.306 05:24:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:03.306 05:24:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:30:03.306 05:24:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:03.306 { 00:30:03.306 "params": { 00:30:03.306 "name": "Nvme$subsystem", 00:30:03.306 "trtype": "$TEST_TRANSPORT", 00:30:03.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.306 "adrfam": "ipv4", 00:30:03.306 "trsvcid": "$NVMF_PORT", 00:30:03.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.306 "hdgst": ${hdgst:-false}, 00:30:03.306 "ddgst": ${ddgst:-false} 00:30:03.306 }, 00:30:03.306 "method": "bdev_nvme_attach_controller" 00:30:03.306 } 00:30:03.306 EOF 00:30:03.306 )") 00:30:03.306 05:24:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:03.306 05:24:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:30:03.306 05:24:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:03.306 05:24:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:03.306 "params": { 00:30:03.306 "name": "Nvme1", 00:30:03.306 "trtype": "tcp", 00:30:03.306 "traddr": "10.0.0.2", 00:30:03.306 "adrfam": "ipv4", 00:30:03.306 "trsvcid": "4420", 00:30:03.306 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.306 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:03.306 "hdgst": false, 00:30:03.306 "ddgst": false 00:30:03.306 }, 00:30:03.306 "method": "bdev_nvme_attach_controller" 00:30:03.306 }' 00:30:03.306 [2024-12-09 05:24:45.756394] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:30:03.306 [2024-12-09 05:24:45.756445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid655967 ]
00:30:03.565 [2024-12-09 05:24:45.852745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:03.565 [2024-12-09 05:24:45.888917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:03.823 Running I/O for 15 seconds...
00:30:05.699 11370.00 IOPS, 44.41 MiB/s [2024-12-09T04:24:48.750Z] 11505.00 IOPS, 44.94 MiB/s [2024-12-09T04:24:48.750Z] 05:24:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 655422
00:30:06.280 05:24:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:30:06.280 [2024-12-09 05:24:48.723246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:06.280 [2024-12-09 05:24:48.723286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:06.280 [2024-12-09 05:24:48.723307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:06.280 [2024-12-09 05:24:48.723318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:06.281 [... further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: WRITE commands lba:111312-112008 and READ commands lba:111000-111160 (len:8 each), every one completing ABORTED - SQ DELETION (00/08) qid:1 after the target process was killed ...]
00:30:06.281 [2024-12-09 05:24:48.725540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.281 [2024-12-09 05:24:48.725549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.281 [2024-12-09 05:24:48.725559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:111176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.281 [2024-12-09 05:24:48.725568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.281 [2024-12-09 05:24:48.725580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:111184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.281 [2024-12-09 05:24:48.725589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.281 [2024-12-09 05:24:48.725599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.281 [2024-12-09 05:24:48.725608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.281 [2024-12-09 05:24:48.725618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:111200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.281 [2024-12-09 05:24:48.725628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.282 [2024-12-09 05:24:48.725640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:111208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.282 [2024-12-09 05:24:48.725649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.282 [2024-12-09 05:24:48.725659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:111216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.282 [2024-12-09 05:24:48.725667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.282 [2024-12-09 05:24:48.725678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.282 [2024-12-09 05:24:48.725686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.282 [2024-12-09 05:24:48.725697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.282 [2024-12-09 05:24:48.725706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.282 [2024-12-09 05:24:48.725716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.282 [2024-12-09 05:24:48.725724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.282 [2024-12-09 05:24:48.725735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.282 [2024-12-09 05:24:48.725743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.282 [2024-12-09 05:24:48.725754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 
nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.282 [2024-12-09 05:24:48.725763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.282 [2024-12-09 05:24:48.725774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.282 [2024-12-09 05:24:48.725782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.282 [2024-12-09 05:24:48.725792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.282 [2024-12-09 05:24:48.725801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.282 [2024-12-09 05:24:48.725812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.282 [2024-12-09 05:24:48.725821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.282 [2024-12-09 05:24:48.725832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:111288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.282 [2024-12-09 05:24:48.725840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.282 [2024-12-09 05:24:48.725850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e35a0 is same with the state(6) to be set 00:30:06.282 [2024-12-09 05:24:48.725861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:06.282 [2024-12-09 05:24:48.725869] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:06.282 [2024-12-09 05:24:48.725877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111296 len:8 PRP1 0x0 PRP2 0x0 00:30:06.282 [2024-12-09 05:24:48.725891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.282 [2024-12-09 05:24:48.725973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.282 [2024-12-09 05:24:48.725986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.282 [2024-12-09 05:24:48.725996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.282 [2024-12-09 05:24:48.726005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.282 [2024-12-09 05:24:48.726015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.282 [2024-12-09 05:24:48.726024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.282 [2024-12-09 05:24:48.726033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:06.282 [2024-12-09 05:24:48.726042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:06.282 [2024-12-09 05:24:48.726051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:06.282 [2024-12-09 
05:24:48.728731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.282 [2024-12-09 05:24:48.728759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:06.282 [2024-12-09 05:24:48.729246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.282 [2024-12-09 05:24:48.729266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:06.282 [2024-12-09 05:24:48.729276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:06.282 [2024-12-09 05:24:48.729449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:06.282 [2024-12-09 05:24:48.729622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.282 [2024-12-09 05:24:48.729633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.282 [2024-12-09 05:24:48.729643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.282 [2024-12-09 05:24:48.729653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.282 [2024-12-09 05:24:48.741755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.282 [2024-12-09 05:24:48.742175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.282 [2024-12-09 05:24:48.742246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:06.282 [2024-12-09 05:24:48.742282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:06.282 [2024-12-09 05:24:48.742886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:06.282 [2024-12-09 05:24:48.743059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.282 [2024-12-09 05:24:48.743070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.282 [2024-12-09 05:24:48.743083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.282 [2024-12-09 05:24:48.743092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.541 [2024-12-09 05:24:48.754608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.542 [2024-12-09 05:24:48.755026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.542 [2024-12-09 05:24:48.755045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:06.542 [2024-12-09 05:24:48.755054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:06.542 [2024-12-09 05:24:48.755219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:06.542 [2024-12-09 05:24:48.755378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.542 [2024-12-09 05:24:48.755389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.542 [2024-12-09 05:24:48.755398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.542 [2024-12-09 05:24:48.755405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.542 [2024-12-09 05:24:48.767315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.542 [2024-12-09 05:24:48.767657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.542 [2024-12-09 05:24:48.767676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:06.542 [2024-12-09 05:24:48.767685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:06.542 [2024-12-09 05:24:48.767844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:06.542 [2024-12-09 05:24:48.768003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.542 [2024-12-09 05:24:48.768015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.542 [2024-12-09 05:24:48.768023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.542 [2024-12-09 05:24:48.768030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.542 [2024-12-09 05:24:48.780295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.542 [2024-12-09 05:24:48.780718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.542 [2024-12-09 05:24:48.780737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:06.542 [2024-12-09 05:24:48.780747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:06.542 [2024-12-09 05:24:48.780915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:06.542 [2024-12-09 05:24:48.781082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.542 [2024-12-09 05:24:48.781093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.542 [2024-12-09 05:24:48.781102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.542 [2024-12-09 05:24:48.781110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.542 [2024-12-09 05:24:48.793217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.542 [2024-12-09 05:24:48.793694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.542 [2024-12-09 05:24:48.793748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:06.542 [2024-12-09 05:24:48.793780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:06.542 [2024-12-09 05:24:48.794280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:06.542 [2024-12-09 05:24:48.794454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.542 [2024-12-09 05:24:48.794466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.542 [2024-12-09 05:24:48.794475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.542 [2024-12-09 05:24:48.794483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.542 [2024-12-09 05:24:48.805904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.542 [2024-12-09 05:24:48.806323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.542 [2024-12-09 05:24:48.806382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:06.542 [2024-12-09 05:24:48.806415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:06.542 [2024-12-09 05:24:48.807010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:06.542 [2024-12-09 05:24:48.807195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.542 [2024-12-09 05:24:48.807206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.542 [2024-12-09 05:24:48.807223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.542 [2024-12-09 05:24:48.807232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.542 [2024-12-09 05:24:48.818632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.542 [2024-12-09 05:24:48.818967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.542 [2024-12-09 05:24:48.818986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:06.542 [2024-12-09 05:24:48.818996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:06.542 [2024-12-09 05:24:48.819154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:06.542 [2024-12-09 05:24:48.819322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.542 [2024-12-09 05:24:48.819333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.542 [2024-12-09 05:24:48.819342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.542 [2024-12-09 05:24:48.819350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.542 [2024-12-09 05:24:48.831351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.542 [2024-12-09 05:24:48.831783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.542 [2024-12-09 05:24:48.831837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:06.542 [2024-12-09 05:24:48.831877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:06.542 [2024-12-09 05:24:48.832372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:06.542 [2024-12-09 05:24:48.832533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.542 [2024-12-09 05:24:48.832544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.542 [2024-12-09 05:24:48.832554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.542 [2024-12-09 05:24:48.832562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.542 [2024-12-09 05:24:48.844134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.542 [2024-12-09 05:24:48.844547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.542 [2024-12-09 05:24:48.844565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:06.542 [2024-12-09 05:24:48.844575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:06.542 [2024-12-09 05:24:48.844733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:06.542 [2024-12-09 05:24:48.844892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.542 [2024-12-09 05:24:48.844903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.542 [2024-12-09 05:24:48.844911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.542 [2024-12-09 05:24:48.844919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.542 [2024-12-09 05:24:48.856937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.542 [2024-12-09 05:24:48.857358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.542 [2024-12-09 05:24:48.857423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:06.542 [2024-12-09 05:24:48.857455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:06.542 [2024-12-09 05:24:48.858050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:06.542 [2024-12-09 05:24:48.858547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.542 [2024-12-09 05:24:48.858559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.542 [2024-12-09 05:24:48.858567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.543 [2024-12-09 05:24:48.858575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.543 [2024-12-09 05:24:48.869811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.543 [2024-12-09 05:24:48.870142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.543 [2024-12-09 05:24:48.870160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:06.543 [2024-12-09 05:24:48.870170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:06.543 [2024-12-09 05:24:48.870355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:06.543 [2024-12-09 05:24:48.870526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.543 [2024-12-09 05:24:48.870538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.543 [2024-12-09 05:24:48.870547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.543 [2024-12-09 05:24:48.870555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.543 [2024-12-09 05:24:48.882604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.543 [2024-12-09 05:24:48.883002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.543 [2024-12-09 05:24:48.883022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:06.543 [2024-12-09 05:24:48.883032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:06.543 [2024-12-09 05:24:48.883193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:06.543 [2024-12-09 05:24:48.883358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.543 [2024-12-09 05:24:48.883370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.543 [2024-12-09 05:24:48.883378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.543 [2024-12-09 05:24:48.883387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.543 [2024-12-09 05:24:48.895414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.543 [2024-12-09 05:24:48.895838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.543 [2024-12-09 05:24:48.895891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.543 [2024-12-09 05:24:48.895924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.543 [2024-12-09 05:24:48.896364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.543 [2024-12-09 05:24:48.896531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.543 [2024-12-09 05:24:48.896542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.543 [2024-12-09 05:24:48.896550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.543 [2024-12-09 05:24:48.896558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.543 [2024-12-09 05:24:48.908186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.543 [2024-12-09 05:24:48.908535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.543 [2024-12-09 05:24:48.908554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.543 [2024-12-09 05:24:48.908564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.543 [2024-12-09 05:24:48.908722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.543 [2024-12-09 05:24:48.908880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.543 [2024-12-09 05:24:48.908891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.543 [2024-12-09 05:24:48.908902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.543 [2024-12-09 05:24:48.908911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.543 [2024-12-09 05:24:48.920920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.543 [2024-12-09 05:24:48.921322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.543 [2024-12-09 05:24:48.921341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.543 [2024-12-09 05:24:48.921351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.543 [2024-12-09 05:24:48.921509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.543 [2024-12-09 05:24:48.921667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.543 [2024-12-09 05:24:48.921678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.543 [2024-12-09 05:24:48.921686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.543 [2024-12-09 05:24:48.921694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.543 [2024-12-09 05:24:48.933702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.543 [2024-12-09 05:24:48.934108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.543 [2024-12-09 05:24:48.934127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.543 [2024-12-09 05:24:48.934137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.543 [2024-12-09 05:24:48.934302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.543 [2024-12-09 05:24:48.934461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.543 [2024-12-09 05:24:48.934472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.543 [2024-12-09 05:24:48.934480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.543 [2024-12-09 05:24:48.934487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.543 [2024-12-09 05:24:48.946494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.543 [2024-12-09 05:24:48.946841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.543 [2024-12-09 05:24:48.946859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.543 [2024-12-09 05:24:48.946869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.543 [2024-12-09 05:24:48.947027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.543 [2024-12-09 05:24:48.947185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.543 [2024-12-09 05:24:48.947196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.543 [2024-12-09 05:24:48.947205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.543 [2024-12-09 05:24:48.947220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.543 [2024-12-09 05:24:48.959238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.543 [2024-12-09 05:24:48.959667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.543 [2024-12-09 05:24:48.959719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.543 [2024-12-09 05:24:48.959752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.543 [2024-12-09 05:24:48.960361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.543 [2024-12-09 05:24:48.960883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.543 [2024-12-09 05:24:48.960894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.543 [2024-12-09 05:24:48.960902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.543 [2024-12-09 05:24:48.960910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.543 [2024-12-09 05:24:48.972032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.543 [2024-12-09 05:24:48.972430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.543 [2024-12-09 05:24:48.972450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.543 [2024-12-09 05:24:48.972460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.543 [2024-12-09 05:24:48.972618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.543 [2024-12-09 05:24:48.972776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.543 [2024-12-09 05:24:48.972787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.543 [2024-12-09 05:24:48.972795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.543 [2024-12-09 05:24:48.972803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.543 [2024-12-09 05:24:48.984851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.543 [2024-12-09 05:24:48.985240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.543 [2024-12-09 05:24:48.985259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.543 [2024-12-09 05:24:48.985269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.543 [2024-12-09 05:24:48.985436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.543 [2024-12-09 05:24:48.985602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.544 [2024-12-09 05:24:48.985614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.544 [2024-12-09 05:24:48.985622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.544 [2024-12-09 05:24:48.985630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.544 [2024-12-09 05:24:48.997892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.544 [2024-12-09 05:24:48.998314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.544 [2024-12-09 05:24:48.998334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.544 [2024-12-09 05:24:48.998348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.544 [2024-12-09 05:24:48.998520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.544 [2024-12-09 05:24:48.998693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.544 [2024-12-09 05:24:48.998705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.544 [2024-12-09 05:24:48.998714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.544 [2024-12-09 05:24:48.998722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.803 [2024-12-09 05:24:49.010841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.803 [2024-12-09 05:24:49.011256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.803 [2024-12-09 05:24:49.011276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.803 [2024-12-09 05:24:49.011286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.803 [2024-12-09 05:24:49.011459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.803 [2024-12-09 05:24:49.011632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.803 [2024-12-09 05:24:49.011645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.803 [2024-12-09 05:24:49.011653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.803 [2024-12-09 05:24:49.011662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.803 [2024-12-09 05:24:49.023825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.803 [2024-12-09 05:24:49.024254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.803 [2024-12-09 05:24:49.024274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.803 [2024-12-09 05:24:49.024294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.803 [2024-12-09 05:24:49.024453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.803 [2024-12-09 05:24:49.024612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.803 [2024-12-09 05:24:49.024623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.803 [2024-12-09 05:24:49.024631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.803 [2024-12-09 05:24:49.024639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.803 [2024-12-09 05:24:49.036766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.803 [2024-12-09 05:24:49.037175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.803 [2024-12-09 05:24:49.037193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.803 [2024-12-09 05:24:49.037202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.803 [2024-12-09 05:24:49.037368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.803 [2024-12-09 05:24:49.037531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.804 [2024-12-09 05:24:49.037543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.804 [2024-12-09 05:24:49.037552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.804 [2024-12-09 05:24:49.037559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.804 [2024-12-09 05:24:49.049564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.804 [2024-12-09 05:24:49.049972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.804 [2024-12-09 05:24:49.050027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.804 [2024-12-09 05:24:49.050059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.804 [2024-12-09 05:24:49.050673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.804 [2024-12-09 05:24:49.050850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.804 [2024-12-09 05:24:49.050860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.804 [2024-12-09 05:24:49.050868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.804 [2024-12-09 05:24:49.050875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.804 [2024-12-09 05:24:49.062313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.804 [2024-12-09 05:24:49.062726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.804 [2024-12-09 05:24:49.062744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.804 [2024-12-09 05:24:49.062754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.804 [2024-12-09 05:24:49.062911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.804 [2024-12-09 05:24:49.063070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.804 [2024-12-09 05:24:49.063081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.804 [2024-12-09 05:24:49.063089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.804 [2024-12-09 05:24:49.063097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.804 [2024-12-09 05:24:49.075145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.804 [2024-12-09 05:24:49.075548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.804 [2024-12-09 05:24:49.075567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.804 [2024-12-09 05:24:49.075577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.804 [2024-12-09 05:24:49.075736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.804 [2024-12-09 05:24:49.075895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.804 [2024-12-09 05:24:49.075906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.804 [2024-12-09 05:24:49.075918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.804 [2024-12-09 05:24:49.075926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.804 [2024-12-09 05:24:49.087922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.804 [2024-12-09 05:24:49.088337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.804 [2024-12-09 05:24:49.088356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.804 [2024-12-09 05:24:49.088365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.804 [2024-12-09 05:24:49.088524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.804 [2024-12-09 05:24:49.088683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.804 [2024-12-09 05:24:49.088694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.804 [2024-12-09 05:24:49.088702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.804 [2024-12-09 05:24:49.088710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.804 [2024-12-09 05:24:49.100730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.804 [2024-12-09 05:24:49.101125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.804 [2024-12-09 05:24:49.101144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.804 [2024-12-09 05:24:49.101153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.804 [2024-12-09 05:24:49.101318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.804 [2024-12-09 05:24:49.101478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.804 [2024-12-09 05:24:49.101489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.804 [2024-12-09 05:24:49.101497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.804 [2024-12-09 05:24:49.101505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.804 10086.00 IOPS, 39.40 MiB/s [2024-12-09T04:24:49.274Z] [2024-12-09 05:24:49.113484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.804 [2024-12-09 05:24:49.113895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.804 [2024-12-09 05:24:49.113914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.804 [2024-12-09 05:24:49.113923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.804 [2024-12-09 05:24:49.114082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.804 [2024-12-09 05:24:49.114249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.804 [2024-12-09 05:24:49.114260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.804 [2024-12-09 05:24:49.114269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.804 [2024-12-09 05:24:49.114276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.804 [2024-12-09 05:24:49.126293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.804 [2024-12-09 05:24:49.126684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.804 [2024-12-09 05:24:49.126702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.804 [2024-12-09 05:24:49.126712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.804 [2024-12-09 05:24:49.126870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.804 [2024-12-09 05:24:49.127029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.804 [2024-12-09 05:24:49.127040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.804 [2024-12-09 05:24:49.127049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.804 [2024-12-09 05:24:49.127056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.804 [2024-12-09 05:24:49.139019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.804 [2024-12-09 05:24:49.139434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.804 [2024-12-09 05:24:49.139480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.804 [2024-12-09 05:24:49.139513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.804 [2024-12-09 05:24:49.140060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.804 [2024-12-09 05:24:49.140225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.804 [2024-12-09 05:24:49.140235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.804 [2024-12-09 05:24:49.140244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.804 [2024-12-09 05:24:49.140252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.804 [2024-12-09 05:24:49.151743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.804 [2024-12-09 05:24:49.152147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.804 [2024-12-09 05:24:49.152165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.804 [2024-12-09 05:24:49.152175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.804 [2024-12-09 05:24:49.152341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.804 [2024-12-09 05:24:49.152500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.804 [2024-12-09 05:24:49.152511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.804 [2024-12-09 05:24:49.152519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.804 [2024-12-09 05:24:49.152527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.805 [2024-12-09 05:24:49.164500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.805 [2024-12-09 05:24:49.164836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.805 [2024-12-09 05:24:49.164855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.805 [2024-12-09 05:24:49.164867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.805 [2024-12-09 05:24:49.165026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.805 [2024-12-09 05:24:49.165184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.805 [2024-12-09 05:24:49.165194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.805 [2024-12-09 05:24:49.165202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.805 [2024-12-09 05:24:49.165216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.805 [2024-12-09 05:24:49.177198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.805 [2024-12-09 05:24:49.177594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.805 [2024-12-09 05:24:49.177613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.805 [2024-12-09 05:24:49.177623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.805 [2024-12-09 05:24:49.177781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.805 [2024-12-09 05:24:49.177940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.805 [2024-12-09 05:24:49.177951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.805 [2024-12-09 05:24:49.177960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.805 [2024-12-09 05:24:49.177968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.805 [2024-12-09 05:24:49.189982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.805 [2024-12-09 05:24:49.190401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.805 [2024-12-09 05:24:49.190456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.805 [2024-12-09 05:24:49.190489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.805 [2024-12-09 05:24:49.191002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.805 [2024-12-09 05:24:49.191162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.805 [2024-12-09 05:24:49.191173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.805 [2024-12-09 05:24:49.191182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.805 [2024-12-09 05:24:49.191190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.805 [2024-12-09 05:24:49.202715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.805 [2024-12-09 05:24:49.203057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.805 [2024-12-09 05:24:49.203075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.805 [2024-12-09 05:24:49.203084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.805 [2024-12-09 05:24:49.203252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.805 [2024-12-09 05:24:49.203414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.805 [2024-12-09 05:24:49.203425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.805 [2024-12-09 05:24:49.203434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.805 [2024-12-09 05:24:49.203442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.805 [2024-12-09 05:24:49.215449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.805 [2024-12-09 05:24:49.215865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.805 [2024-12-09 05:24:49.215884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.805 [2024-12-09 05:24:49.215893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.805 [2024-12-09 05:24:49.216051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.805 [2024-12-09 05:24:49.216217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.805 [2024-12-09 05:24:49.216228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.805 [2024-12-09 05:24:49.216236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.805 [2024-12-09 05:24:49.216244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.805 [2024-12-09 05:24:49.228196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.805 [2024-12-09 05:24:49.228615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.805 [2024-12-09 05:24:49.228661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.805 [2024-12-09 05:24:49.228695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.805 [2024-12-09 05:24:49.229266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.805 [2024-12-09 05:24:49.229426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.805 [2024-12-09 05:24:49.229437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.805 [2024-12-09 05:24:49.229445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.805 [2024-12-09 05:24:49.229453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.805 [2024-12-09 05:24:49.240987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:06.805 [2024-12-09 05:24:49.241372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:06.805 [2024-12-09 05:24:49.241393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:06.805 [2024-12-09 05:24:49.241403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:06.805 [2024-12-09 05:24:49.241571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:06.805 [2024-12-09 05:24:49.241738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:06.805 [2024-12-09 05:24:49.241749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:06.805 [2024-12-09 05:24:49.241761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:06.805 [2024-12-09 05:24:49.241770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:06.805 [2024-12-09 05:24:49.253874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.805 [2024-12-09 05:24:49.254292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.805 [2024-12-09 05:24:49.254348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:06.805 [2024-12-09 05:24:49.254381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:06.805 [2024-12-09 05:24:49.254977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:06.805 [2024-12-09 05:24:49.255344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.805 [2024-12-09 05:24:49.255356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.805 [2024-12-09 05:24:49.255365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.805 [2024-12-09 05:24:49.255374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:06.805 [2024-12-09 05:24:49.266857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:06.805 [2024-12-09 05:24:49.267261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:06.805 [2024-12-09 05:24:49.267280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:06.805 [2024-12-09 05:24:49.267291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:06.805 [2024-12-09 05:24:49.267458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:06.805 [2024-12-09 05:24:49.267627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:06.805 [2024-12-09 05:24:49.267639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:06.805 [2024-12-09 05:24:49.267647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:06.805 [2024-12-09 05:24:49.267656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.065 [2024-12-09 05:24:49.279628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.065 [2024-12-09 05:24:49.280022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.065 [2024-12-09 05:24:49.280040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.065 [2024-12-09 05:24:49.280050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.065 [2024-12-09 05:24:49.280215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.065 [2024-12-09 05:24:49.280374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.065 [2024-12-09 05:24:49.280384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.065 [2024-12-09 05:24:49.280393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.065 [2024-12-09 05:24:49.280401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.065 [2024-12-09 05:24:49.292376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.065 [2024-12-09 05:24:49.292787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.065 [2024-12-09 05:24:49.292805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.065 [2024-12-09 05:24:49.292815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.065 [2024-12-09 05:24:49.292973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.065 [2024-12-09 05:24:49.293132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.065 [2024-12-09 05:24:49.293143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.065 [2024-12-09 05:24:49.293151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.065 [2024-12-09 05:24:49.293159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.065 [2024-12-09 05:24:49.305120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.065 [2024-12-09 05:24:49.305534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.065 [2024-12-09 05:24:49.305554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.065 [2024-12-09 05:24:49.305563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.065 [2024-12-09 05:24:49.305722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.065 [2024-12-09 05:24:49.305880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.065 [2024-12-09 05:24:49.305892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.065 [2024-12-09 05:24:49.305900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.065 [2024-12-09 05:24:49.305908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.065 [2024-12-09 05:24:49.317995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.065 [2024-12-09 05:24:49.318381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.065 [2024-12-09 05:24:49.318434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.065 [2024-12-09 05:24:49.318467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.065 [2024-12-09 05:24:49.319062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.065 [2024-12-09 05:24:49.319608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.065 [2024-12-09 05:24:49.319619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.065 [2024-12-09 05:24:49.319627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.065 [2024-12-09 05:24:49.319635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.065 [2024-12-09 05:24:49.330779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.065 [2024-12-09 05:24:49.331130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.065 [2024-12-09 05:24:49.331148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.065 [2024-12-09 05:24:49.331161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.065 [2024-12-09 05:24:49.331325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.065 [2024-12-09 05:24:49.331484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.065 [2024-12-09 05:24:49.331495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.065 [2024-12-09 05:24:49.331503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.065 [2024-12-09 05:24:49.331511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.065 [2024-12-09 05:24:49.343550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.065 [2024-12-09 05:24:49.343927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.065 [2024-12-09 05:24:49.343957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.065 [2024-12-09 05:24:49.343967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.065 [2024-12-09 05:24:49.344125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.065 [2024-12-09 05:24:49.344290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.065 [2024-12-09 05:24:49.344301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.065 [2024-12-09 05:24:49.344309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.065 [2024-12-09 05:24:49.344317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.065 [2024-12-09 05:24:49.356309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.065 [2024-12-09 05:24:49.356731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.065 [2024-12-09 05:24:49.356784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.065 [2024-12-09 05:24:49.356817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.065 [2024-12-09 05:24:49.357407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.065 [2024-12-09 05:24:49.357567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.065 [2024-12-09 05:24:49.357577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.065 [2024-12-09 05:24:49.357586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.065 [2024-12-09 05:24:49.357594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.065 [2024-12-09 05:24:49.369048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.065 [2024-12-09 05:24:49.369388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.065 [2024-12-09 05:24:49.369442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.066 [2024-12-09 05:24:49.369475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.066 [2024-12-09 05:24:49.369942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.066 [2024-12-09 05:24:49.370108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.066 [2024-12-09 05:24:49.370121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.066 [2024-12-09 05:24:49.370131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.066 [2024-12-09 05:24:49.370139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.066 [2024-12-09 05:24:49.381792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.066 [2024-12-09 05:24:49.382196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.066 [2024-12-09 05:24:49.382263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.066 [2024-12-09 05:24:49.382296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.066 [2024-12-09 05:24:49.382754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.066 [2024-12-09 05:24:49.382914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.066 [2024-12-09 05:24:49.382925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.066 [2024-12-09 05:24:49.382933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.066 [2024-12-09 05:24:49.382941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.066 [2024-12-09 05:24:49.394530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.066 [2024-12-09 05:24:49.394943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.066 [2024-12-09 05:24:49.394997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.066 [2024-12-09 05:24:49.395030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.066 [2024-12-09 05:24:49.395530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.066 [2024-12-09 05:24:49.395690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.066 [2024-12-09 05:24:49.395701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.066 [2024-12-09 05:24:49.395709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.066 [2024-12-09 05:24:49.395716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.066 [2024-12-09 05:24:49.407329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.066 [2024-12-09 05:24:49.407662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.066 [2024-12-09 05:24:49.407726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.066 [2024-12-09 05:24:49.407759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.066 [2024-12-09 05:24:49.408370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.066 [2024-12-09 05:24:49.408625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.066 [2024-12-09 05:24:49.408636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.066 [2024-12-09 05:24:49.408649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.066 [2024-12-09 05:24:49.408658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.066 [2024-12-09 05:24:49.420102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.066 [2024-12-09 05:24:49.420502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.066 [2024-12-09 05:24:49.420521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.066 [2024-12-09 05:24:49.420530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.066 [2024-12-09 05:24:49.420689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.066 [2024-12-09 05:24:49.420849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.066 [2024-12-09 05:24:49.420859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.066 [2024-12-09 05:24:49.420868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.066 [2024-12-09 05:24:49.420876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.066 [2024-12-09 05:24:49.432823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.066 [2024-12-09 05:24:49.433256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.066 [2024-12-09 05:24:49.433312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.066 [2024-12-09 05:24:49.433345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.066 [2024-12-09 05:24:49.433596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.066 [2024-12-09 05:24:49.433756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.066 [2024-12-09 05:24:49.433767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.066 [2024-12-09 05:24:49.433776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.066 [2024-12-09 05:24:49.433784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.066 [2024-12-09 05:24:49.445588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.066 [2024-12-09 05:24:49.446009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.066 [2024-12-09 05:24:49.446027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.066 [2024-12-09 05:24:49.446037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.066 [2024-12-09 05:24:49.446195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.066 [2024-12-09 05:24:49.446364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.066 [2024-12-09 05:24:49.446376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.066 [2024-12-09 05:24:49.446384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.066 [2024-12-09 05:24:49.446392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.066 [2024-12-09 05:24:49.458405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.066 [2024-12-09 05:24:49.458817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.066 [2024-12-09 05:24:49.458859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.066 [2024-12-09 05:24:49.458892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.066 [2024-12-09 05:24:49.459507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.066 [2024-12-09 05:24:49.459724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.066 [2024-12-09 05:24:49.459735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.066 [2024-12-09 05:24:49.459744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.066 [2024-12-09 05:24:49.459752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.066 [2024-12-09 05:24:49.471111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.066 [2024-12-09 05:24:49.471463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.066 [2024-12-09 05:24:49.471483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.066 [2024-12-09 05:24:49.471493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.066 [2024-12-09 05:24:49.471652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.066 [2024-12-09 05:24:49.471812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.066 [2024-12-09 05:24:49.471823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.066 [2024-12-09 05:24:49.471831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.066 [2024-12-09 05:24:49.471839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.066 [2024-12-09 05:24:49.483936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.066 [2024-12-09 05:24:49.484330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.066 [2024-12-09 05:24:49.484349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.066 [2024-12-09 05:24:49.484358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.066 [2024-12-09 05:24:49.484517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.066 [2024-12-09 05:24:49.484676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.066 [2024-12-09 05:24:49.484686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.066 [2024-12-09 05:24:49.484695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.066 [2024-12-09 05:24:49.484703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.066 [2024-12-09 05:24:49.496703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.066 [2024-12-09 05:24:49.497113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.067 [2024-12-09 05:24:49.497132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.067 [2024-12-09 05:24:49.497145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.067 [2024-12-09 05:24:49.497318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.067 [2024-12-09 05:24:49.497487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.067 [2024-12-09 05:24:49.497498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.067 [2024-12-09 05:24:49.497507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.067 [2024-12-09 05:24:49.497516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.067 [2024-12-09 05:24:49.509558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.067 [2024-12-09 05:24:49.509913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.067 [2024-12-09 05:24:49.509932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.067 [2024-12-09 05:24:49.509942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.067 [2024-12-09 05:24:49.510110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.067 [2024-12-09 05:24:49.510283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.067 [2024-12-09 05:24:49.510295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.067 [2024-12-09 05:24:49.510305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.067 [2024-12-09 05:24:49.510314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.067 [2024-12-09 05:24:49.522391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.067 [2024-12-09 05:24:49.522708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.067 [2024-12-09 05:24:49.522727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.067 [2024-12-09 05:24:49.522737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.067 [2024-12-09 05:24:49.522905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.067 [2024-12-09 05:24:49.523072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.067 [2024-12-09 05:24:49.523083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.067 [2024-12-09 05:24:49.523092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.067 [2024-12-09 05:24:49.523100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.326 [2024-12-09 05:24:49.535302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.326 [2024-12-09 05:24:49.535630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.326 [2024-12-09 05:24:49.535648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.326 [2024-12-09 05:24:49.535658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.326 [2024-12-09 05:24:49.535839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.326 [2024-12-09 05:24:49.536010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.326 [2024-12-09 05:24:49.536021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.326 [2024-12-09 05:24:49.536029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.326 [2024-12-09 05:24:49.536038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.326 [2024-12-09 05:24:49.547993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.326 [2024-12-09 05:24:49.548413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.326 [2024-12-09 05:24:49.548433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.326 [2024-12-09 05:24:49.548443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.326 [2024-12-09 05:24:49.548603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.326 [2024-12-09 05:24:49.548762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.326 [2024-12-09 05:24:49.548773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.326 [2024-12-09 05:24:49.548781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.326 [2024-12-09 05:24:49.548789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.326 [2024-12-09 05:24:49.560796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.326 [2024-12-09 05:24:49.561211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.326 [2024-12-09 05:24:49.561230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.326 [2024-12-09 05:24:49.561240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.326 [2024-12-09 05:24:49.561398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.326 [2024-12-09 05:24:49.561556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.326 [2024-12-09 05:24:49.561567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.326 [2024-12-09 05:24:49.561575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.326 [2024-12-09 05:24:49.561583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.326 [2024-12-09 05:24:49.573565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.326 [2024-12-09 05:24:49.573988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.326 [2024-12-09 05:24:49.574041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.326 [2024-12-09 05:24:49.574073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.326 [2024-12-09 05:24:49.574686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.326 [2024-12-09 05:24:49.575158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.326 [2024-12-09 05:24:49.575169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.326 [2024-12-09 05:24:49.575181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.326 [2024-12-09 05:24:49.575190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.326 [2024-12-09 05:24:49.586258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.326 [2024-12-09 05:24:49.586670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.326 [2024-12-09 05:24:49.586688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.326 [2024-12-09 05:24:49.586698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.326 [2024-12-09 05:24:49.586857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.326 [2024-12-09 05:24:49.587015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.326 [2024-12-09 05:24:49.587026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.326 [2024-12-09 05:24:49.587034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.326 [2024-12-09 05:24:49.587042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.326 [2024-12-09 05:24:49.599045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.326 [2024-12-09 05:24:49.599469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.327 [2024-12-09 05:24:49.599523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.327 [2024-12-09 05:24:49.599556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.327 [2024-12-09 05:24:49.600101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.327 [2024-12-09 05:24:49.600513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.327 [2024-12-09 05:24:49.600538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.327 [2024-12-09 05:24:49.600559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.327 [2024-12-09 05:24:49.600577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.327 [2024-12-09 05:24:49.613862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.327 [2024-12-09 05:24:49.614397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.327 [2024-12-09 05:24:49.614451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.327 [2024-12-09 05:24:49.614483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.327 [2024-12-09 05:24:49.615077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.327 [2024-12-09 05:24:49.615536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.327 [2024-12-09 05:24:49.615553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.327 [2024-12-09 05:24:49.615567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.327 [2024-12-09 05:24:49.615580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.327 [2024-12-09 05:24:49.626784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.327 [2024-12-09 05:24:49.627219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.327 [2024-12-09 05:24:49.627239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.327 [2024-12-09 05:24:49.627249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.327 [2024-12-09 05:24:49.627421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.327 [2024-12-09 05:24:49.627593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.327 [2024-12-09 05:24:49.627605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.327 [2024-12-09 05:24:49.627614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.327 [2024-12-09 05:24:49.627622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.327 [2024-12-09 05:24:49.639553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.327 [2024-12-09 05:24:49.639891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.327 [2024-12-09 05:24:49.639910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.327 [2024-12-09 05:24:49.639919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.327 [2024-12-09 05:24:49.640077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.327 [2024-12-09 05:24:49.640243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.327 [2024-12-09 05:24:49.640255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.327 [2024-12-09 05:24:49.640263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.327 [2024-12-09 05:24:49.640271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.327 [2024-12-09 05:24:49.652283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.327 [2024-12-09 05:24:49.652686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.327 [2024-12-09 05:24:49.652704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.327 [2024-12-09 05:24:49.652714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.327 [2024-12-09 05:24:49.652873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.327 [2024-12-09 05:24:49.653031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.327 [2024-12-09 05:24:49.653042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.327 [2024-12-09 05:24:49.653050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.327 [2024-12-09 05:24:49.653058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.327 [2024-12-09 05:24:49.665033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.327 [2024-12-09 05:24:49.665445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.327 [2024-12-09 05:24:49.665464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.327 [2024-12-09 05:24:49.665476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.327 [2024-12-09 05:24:49.665636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.327 [2024-12-09 05:24:49.665794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.327 [2024-12-09 05:24:49.665805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.327 [2024-12-09 05:24:49.665813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.327 [2024-12-09 05:24:49.665821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.327 [2024-12-09 05:24:49.677725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.327 [2024-12-09 05:24:49.678136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.327 [2024-12-09 05:24:49.678155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.327 [2024-12-09 05:24:49.678164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.327 [2024-12-09 05:24:49.678328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.327 [2024-12-09 05:24:49.678488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.327 [2024-12-09 05:24:49.678499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.327 [2024-12-09 05:24:49.678507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.327 [2024-12-09 05:24:49.678515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.327 [2024-12-09 05:24:49.690508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.327 [2024-12-09 05:24:49.690927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.327 [2024-12-09 05:24:49.690979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.327 [2024-12-09 05:24:49.691012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.327 [2024-12-09 05:24:49.691496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.327 [2024-12-09 05:24:49.691657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.327 [2024-12-09 05:24:49.691668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.327 [2024-12-09 05:24:49.691677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.327 [2024-12-09 05:24:49.691685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.327 [2024-12-09 05:24:49.703277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.327 [2024-12-09 05:24:49.703648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.327 [2024-12-09 05:24:49.703702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.327 [2024-12-09 05:24:49.703734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.327 [2024-12-09 05:24:49.704220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.327 [2024-12-09 05:24:49.704385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.327 [2024-12-09 05:24:49.704396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.327 [2024-12-09 05:24:49.704404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.327 [2024-12-09 05:24:49.704412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.327 [2024-12-09 05:24:49.715978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.327 [2024-12-09 05:24:49.716334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.327 [2024-12-09 05:24:49.716388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.327 [2024-12-09 05:24:49.716421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.327 [2024-12-09 05:24:49.717015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.327 [2024-12-09 05:24:49.717620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.327 [2024-12-09 05:24:49.717631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.327 [2024-12-09 05:24:49.717640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.327 [2024-12-09 05:24:49.717648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.327 [2024-12-09 05:24:49.728758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.327 [2024-12-09 05:24:49.729099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.327 [2024-12-09 05:24:49.729117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.327 [2024-12-09 05:24:49.729126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.327 [2024-12-09 05:24:49.729291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.327 [2024-12-09 05:24:49.729450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.327 [2024-12-09 05:24:49.729459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.327 [2024-12-09 05:24:49.729467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.327 [2024-12-09 05:24:49.729475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.327 [2024-12-09 05:24:49.741453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.327 [2024-12-09 05:24:49.741795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.327 [2024-12-09 05:24:49.741814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.327 [2024-12-09 05:24:49.741824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.327 [2024-12-09 05:24:49.741981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.327 [2024-12-09 05:24:49.742139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.327 [2024-12-09 05:24:49.742150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.327 [2024-12-09 05:24:49.742162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.327 [2024-12-09 05:24:49.742171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.327 [2024-12-09 05:24:49.754172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.327 [2024-12-09 05:24:49.754544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.327 [2024-12-09 05:24:49.754564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.327 [2024-12-09 05:24:49.754575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.327 [2024-12-09 05:24:49.754746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.327 [2024-12-09 05:24:49.754915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.327 [2024-12-09 05:24:49.754926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.327 [2024-12-09 05:24:49.754936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.327 [2024-12-09 05:24:49.754945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.327 [2024-12-09 05:24:49.767187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.327 [2024-12-09 05:24:49.767596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.327 [2024-12-09 05:24:49.767616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.327 [2024-12-09 05:24:49.767626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.327 [2024-12-09 05:24:49.767793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.327 [2024-12-09 05:24:49.767961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.327 [2024-12-09 05:24:49.767973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.327 [2024-12-09 05:24:49.767981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.327 [2024-12-09 05:24:49.767990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.327 [2024-12-09 05:24:49.780093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.327 [2024-12-09 05:24:49.780497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.327 [2024-12-09 05:24:49.780516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.327 [2024-12-09 05:24:49.780526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.327 [2024-12-09 05:24:49.780694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.327 [2024-12-09 05:24:49.780861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.327 [2024-12-09 05:24:49.780872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.327 [2024-12-09 05:24:49.780882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.327 [2024-12-09 05:24:49.780890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.327 [2024-12-09 05:24:49.793035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.327 [2024-12-09 05:24:49.793309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.327 [2024-12-09 05:24:49.793328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.327 [2024-12-09 05:24:49.793338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.327 [2024-12-09 05:24:49.793505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.327 [2024-12-09 05:24:49.793672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.327 [2024-12-09 05:24:49.793683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.327 [2024-12-09 05:24:49.793692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.327 [2024-12-09 05:24:49.793700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.587 [2024-12-09 05:24:49.805731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.587 [2024-12-09 05:24:49.806120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.587 [2024-12-09 05:24:49.806138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.587 [2024-12-09 05:24:49.806147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.587 [2024-12-09 05:24:49.806310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.587 [2024-12-09 05:24:49.806470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.587 [2024-12-09 05:24:49.806481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.587 [2024-12-09 05:24:49.806489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.587 [2024-12-09 05:24:49.806497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.587 [2024-12-09 05:24:49.818617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.587 [2024-12-09 05:24:49.818953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.587 [2024-12-09 05:24:49.818971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.587 [2024-12-09 05:24:49.818980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.587 [2024-12-09 05:24:49.819138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.587 [2024-12-09 05:24:49.819301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.587 [2024-12-09 05:24:49.819312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.587 [2024-12-09 05:24:49.819321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.587 [2024-12-09 05:24:49.819329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.587 [2024-12-09 05:24:49.831555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.587 [2024-12-09 05:24:49.831980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.587 [2024-12-09 05:24:49.831999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.587 [2024-12-09 05:24:49.832012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.587 [2024-12-09 05:24:49.832194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.587 [2024-12-09 05:24:49.832367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.588 [2024-12-09 05:24:49.832379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.588 [2024-12-09 05:24:49.832388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.588 [2024-12-09 05:24:49.832396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.588 [2024-12-09 05:24:49.844434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.588 [2024-12-09 05:24:49.844766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.588 [2024-12-09 05:24:49.844785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.588 [2024-12-09 05:24:49.844794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.588 [2024-12-09 05:24:49.844952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.588 [2024-12-09 05:24:49.845110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.588 [2024-12-09 05:24:49.845121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.588 [2024-12-09 05:24:49.845129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.588 [2024-12-09 05:24:49.845137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.588 [2024-12-09 05:24:49.857235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.588 [2024-12-09 05:24:49.857569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.588 [2024-12-09 05:24:49.857587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.588 [2024-12-09 05:24:49.857596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.588 [2024-12-09 05:24:49.857755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.588 [2024-12-09 05:24:49.857914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.588 [2024-12-09 05:24:49.857925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.588 [2024-12-09 05:24:49.857933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.588 [2024-12-09 05:24:49.857941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.588 [2024-12-09 05:24:49.869937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.588 [2024-12-09 05:24:49.870339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.588 [2024-12-09 05:24:49.870394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.588 [2024-12-09 05:24:49.870426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.588 [2024-12-09 05:24:49.870903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.588 [2024-12-09 05:24:49.871066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.588 [2024-12-09 05:24:49.871078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.588 [2024-12-09 05:24:49.871086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.588 [2024-12-09 05:24:49.871094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.588 [2024-12-09 05:24:49.882643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.588 [2024-12-09 05:24:49.882995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.588 [2024-12-09 05:24:49.883013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.588 [2024-12-09 05:24:49.883022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.588 [2024-12-09 05:24:49.883180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.588 [2024-12-09 05:24:49.883344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.588 [2024-12-09 05:24:49.883355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.588 [2024-12-09 05:24:49.883364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.588 [2024-12-09 05:24:49.883372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.588 [2024-12-09 05:24:49.895602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.588 [2024-12-09 05:24:49.895998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.588 [2024-12-09 05:24:49.896018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.588 [2024-12-09 05:24:49.896028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.588 [2024-12-09 05:24:49.896194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.588 [2024-12-09 05:24:49.896368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.588 [2024-12-09 05:24:49.896380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.588 [2024-12-09 05:24:49.896389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.588 [2024-12-09 05:24:49.896397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.588 [2024-12-09 05:24:49.908507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.588 [2024-12-09 05:24:49.908865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.588 [2024-12-09 05:24:49.908885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.588 [2024-12-09 05:24:49.908895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.588 [2024-12-09 05:24:49.909067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.588 [2024-12-09 05:24:49.909244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.588 [2024-12-09 05:24:49.909256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.588 [2024-12-09 05:24:49.909271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.588 [2024-12-09 05:24:49.909280] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.588 [2024-12-09 05:24:49.921557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.588 [2024-12-09 05:24:49.921913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.588 [2024-12-09 05:24:49.921933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.588 [2024-12-09 05:24:49.921943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.588 [2024-12-09 05:24:49.922114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.588 [2024-12-09 05:24:49.922293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.588 [2024-12-09 05:24:49.922305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.588 [2024-12-09 05:24:49.922314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.588 [2024-12-09 05:24:49.922322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.588 [2024-12-09 05:24:49.934599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.588 [2024-12-09 05:24:49.934933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.588 [2024-12-09 05:24:49.934952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.588 [2024-12-09 05:24:49.934962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.588 [2024-12-09 05:24:49.935134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.588 [2024-12-09 05:24:49.935310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.588 [2024-12-09 05:24:49.935322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.588 [2024-12-09 05:24:49.935333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.588 [2024-12-09 05:24:49.935341] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.588 [2024-12-09 05:24:49.947487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.588 [2024-12-09 05:24:49.947809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.588 [2024-12-09 05:24:49.947827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.588 [2024-12-09 05:24:49.947837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.588 [2024-12-09 05:24:49.947994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.588 [2024-12-09 05:24:49.948154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.588 [2024-12-09 05:24:49.948165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.588 [2024-12-09 05:24:49.948173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.588 [2024-12-09 05:24:49.948181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.588 [2024-12-09 05:24:49.960370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.588 [2024-12-09 05:24:49.960740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.588 [2024-12-09 05:24:49.960757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.588 [2024-12-09 05:24:49.960767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.588 [2024-12-09 05:24:49.960925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.588 [2024-12-09 05:24:49.961083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.589 [2024-12-09 05:24:49.961094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.589 [2024-12-09 05:24:49.961102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.589 [2024-12-09 05:24:49.961110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.589 [2024-12-09 05:24:49.973133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.589 [2024-12-09 05:24:49.973535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.589 [2024-12-09 05:24:49.973553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.589 [2024-12-09 05:24:49.973563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.589 [2024-12-09 05:24:49.973721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.589 [2024-12-09 05:24:49.973909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.589 [2024-12-09 05:24:49.973921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.589 [2024-12-09 05:24:49.973930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.589 [2024-12-09 05:24:49.973938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.589 [2024-12-09 05:24:49.985995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.589 [2024-12-09 05:24:49.986344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.589 [2024-12-09 05:24:49.986363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.589 [2024-12-09 05:24:49.986373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.589 [2024-12-09 05:24:49.986530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.589 [2024-12-09 05:24:49.986689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.589 [2024-12-09 05:24:49.986700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.589 [2024-12-09 05:24:49.986708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.589 [2024-12-09 05:24:49.986716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.589 [2024-12-09 05:24:49.998864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.589 [2024-12-09 05:24:49.999147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.589 [2024-12-09 05:24:49.999165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.589 [2024-12-09 05:24:49.999177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.589 [2024-12-09 05:24:49.999340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.589 [2024-12-09 05:24:49.999499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.589 [2024-12-09 05:24:49.999510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.589 [2024-12-09 05:24:49.999518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.589 [2024-12-09 05:24:49.999526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.589 [2024-12-09 05:24:50.011772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.589 [2024-12-09 05:24:50.012177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.589 [2024-12-09 05:24:50.012196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.589 [2024-12-09 05:24:50.012206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.589 [2024-12-09 05:24:50.012384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.589 [2024-12-09 05:24:50.012555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.589 [2024-12-09 05:24:50.012567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.589 [2024-12-09 05:24:50.012577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.589 [2024-12-09 05:24:50.012585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.589 [2024-12-09 05:24:50.024847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.589 [2024-12-09 05:24:50.025142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.589 [2024-12-09 05:24:50.025162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.589 [2024-12-09 05:24:50.025173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.589 [2024-12-09 05:24:50.025353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.589 [2024-12-09 05:24:50.025528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.589 [2024-12-09 05:24:50.025539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.589 [2024-12-09 05:24:50.025548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.589 [2024-12-09 05:24:50.025558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.589 [2024-12-09 05:24:50.037811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.589 [2024-12-09 05:24:50.038145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.589 [2024-12-09 05:24:50.038165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.589 [2024-12-09 05:24:50.038175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.589 [2024-12-09 05:24:50.038354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.589 [2024-12-09 05:24:50.038531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.589 [2024-12-09 05:24:50.038542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.589 [2024-12-09 05:24:50.038552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.589 [2024-12-09 05:24:50.038561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.589 [2024-12-09 05:24:50.050818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.589 [2024-12-09 05:24:50.051194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.589 [2024-12-09 05:24:50.051218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.589 [2024-12-09 05:24:50.051229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.589 [2024-12-09 05:24:50.051400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.589 [2024-12-09 05:24:50.051573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.589 [2024-12-09 05:24:50.051584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.589 [2024-12-09 05:24:50.051593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.589 [2024-12-09 05:24:50.051602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.848 [2024-12-09 05:24:50.063794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.848 [2024-12-09 05:24:50.064135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.848 [2024-12-09 05:24:50.064155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.848 [2024-12-09 05:24:50.064165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.848 [2024-12-09 05:24:50.064342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.848 [2024-12-09 05:24:50.064515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.848 [2024-12-09 05:24:50.064527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.848 [2024-12-09 05:24:50.064536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.848 [2024-12-09 05:24:50.064544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.848 [2024-12-09 05:24:50.076849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.848 [2024-12-09 05:24:50.077217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.848 [2024-12-09 05:24:50.077236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.848 [2024-12-09 05:24:50.077247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.848 [2024-12-09 05:24:50.077425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.848 [2024-12-09 05:24:50.077592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.848 [2024-12-09 05:24:50.077603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.848 [2024-12-09 05:24:50.077615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.848 [2024-12-09 05:24:50.077624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.848 [2024-12-09 05:24:50.089898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.848 [2024-12-09 05:24:50.090248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.848 [2024-12-09 05:24:50.090268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.848 [2024-12-09 05:24:50.090278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.848 [2024-12-09 05:24:50.090445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.849 [2024-12-09 05:24:50.090633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.849 [2024-12-09 05:24:50.090645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.849 [2024-12-09 05:24:50.090654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.849 [2024-12-09 05:24:50.090663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.849 [2024-12-09 05:24:50.102799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.849 [2024-12-09 05:24:50.103169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.849 [2024-12-09 05:24:50.103236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.849 [2024-12-09 05:24:50.103270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.849 [2024-12-09 05:24:50.103864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.849 [2024-12-09 05:24:50.104483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.849 [2024-12-09 05:24:50.104495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.849 [2024-12-09 05:24:50.104504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.849 [2024-12-09 05:24:50.104513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.849 7564.50 IOPS, 29.55 MiB/s [2024-12-09T04:24:50.319Z] [2024-12-09 05:24:50.115814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.849 [2024-12-09 05:24:50.116248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.849 [2024-12-09 05:24:50.116268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.849 [2024-12-09 05:24:50.116278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.849 [2024-12-09 05:24:50.116451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.849 [2024-12-09 05:24:50.116623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.849 [2024-12-09 05:24:50.116634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.849 [2024-12-09 05:24:50.116643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.849 [2024-12-09 05:24:50.116651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.849 [2024-12-09 05:24:50.128784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.849 [2024-12-09 05:24:50.129191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.849 [2024-12-09 05:24:50.129217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.849 [2024-12-09 05:24:50.129228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.849 [2024-12-09 05:24:50.129400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.849 [2024-12-09 05:24:50.129572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.849 [2024-12-09 05:24:50.129584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.849 [2024-12-09 05:24:50.129594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.849 [2024-12-09 05:24:50.129603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.849 [2024-12-09 05:24:50.141729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.849 [2024-12-09 05:24:50.142131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.849 [2024-12-09 05:24:50.142151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.849 [2024-12-09 05:24:50.142161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.849 [2024-12-09 05:24:50.142339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.849 [2024-12-09 05:24:50.142512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.849 [2024-12-09 05:24:50.142524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.849 [2024-12-09 05:24:50.142533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.849 [2024-12-09 05:24:50.142541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.849 [2024-12-09 05:24:50.154663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.849 [2024-12-09 05:24:50.155092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.849 [2024-12-09 05:24:50.155111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.849 [2024-12-09 05:24:50.155121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.849 [2024-12-09 05:24:50.155298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.849 [2024-12-09 05:24:50.155470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.849 [2024-12-09 05:24:50.155482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.849 [2024-12-09 05:24:50.155491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.849 [2024-12-09 05:24:50.155499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.849 [2024-12-09 05:24:50.167637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:07.849 [2024-12-09 05:24:50.167970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.849 [2024-12-09 05:24:50.167989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:07.849 [2024-12-09 05:24:50.168002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:07.849 [2024-12-09 05:24:50.168174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:07.849 [2024-12-09 05:24:50.168352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:07.849 [2024-12-09 05:24:50.168364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:07.849 [2024-12-09 05:24:50.168373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:07.849 [2024-12-09 05:24:50.168381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:07.849 [2024-12-09 05:24:50.180674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.849 [2024-12-09 05:24:50.181082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-12-09 05:24:50.181101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.849 [2024-12-09 05:24:50.181111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.849 [2024-12-09 05:24:50.181288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.849 [2024-12-09 05:24:50.181460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.849 [2024-12-09 05:24:50.181472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.849 [2024-12-09 05:24:50.181481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.849 [2024-12-09 05:24:50.181489] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.849 [2024-12-09 05:24:50.193605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.849 [2024-12-09 05:24:50.194027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-12-09 05:24:50.194046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.849 [2024-12-09 05:24:50.194057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.849 [2024-12-09 05:24:50.194233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.849 [2024-12-09 05:24:50.194405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.849 [2024-12-09 05:24:50.194417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.849 [2024-12-09 05:24:50.194426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.849 [2024-12-09 05:24:50.194434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.849 [2024-12-09 05:24:50.206565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.849 [2024-12-09 05:24:50.206991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-12-09 05:24:50.207010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.849 [2024-12-09 05:24:50.207020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.849 [2024-12-09 05:24:50.207191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.849 [2024-12-09 05:24:50.207373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.849 [2024-12-09 05:24:50.207386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.849 [2024-12-09 05:24:50.207394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.849 [2024-12-09 05:24:50.207403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.849 [2024-12-09 05:24:50.219530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.849 [2024-12-09 05:24:50.219962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-12-09 05:24:50.219981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.849 [2024-12-09 05:24:50.219991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.849 [2024-12-09 05:24:50.220162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.849 [2024-12-09 05:24:50.220340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.849 [2024-12-09 05:24:50.220352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.849 [2024-12-09 05:24:50.220361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.849 [2024-12-09 05:24:50.220369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.849 [2024-12-09 05:24:50.232502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.849 [2024-12-09 05:24:50.232890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-12-09 05:24:50.232909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.849 [2024-12-09 05:24:50.232920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.849 [2024-12-09 05:24:50.233091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.849 [2024-12-09 05:24:50.233269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.849 [2024-12-09 05:24:50.233281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.849 [2024-12-09 05:24:50.233293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.849 [2024-12-09 05:24:50.233301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.849 [2024-12-09 05:24:50.245424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.849 [2024-12-09 05:24:50.245790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-12-09 05:24:50.245809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.849 [2024-12-09 05:24:50.245819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.849 [2024-12-09 05:24:50.245991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.849 [2024-12-09 05:24:50.246162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.849 [2024-12-09 05:24:50.246174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.849 [2024-12-09 05:24:50.246186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.849 [2024-12-09 05:24:50.246195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.849 [2024-12-09 05:24:50.258485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.849 [2024-12-09 05:24:50.258894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-12-09 05:24:50.258913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.849 [2024-12-09 05:24:50.258923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.849 [2024-12-09 05:24:50.259094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.849 [2024-12-09 05:24:50.259272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.849 [2024-12-09 05:24:50.259284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.849 [2024-12-09 05:24:50.259292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.849 [2024-12-09 05:24:50.259301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.849 [2024-12-09 05:24:50.271439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.849 [2024-12-09 05:24:50.271861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-12-09 05:24:50.271880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.849 [2024-12-09 05:24:50.271890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.849 [2024-12-09 05:24:50.272062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.849 [2024-12-09 05:24:50.272241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.849 [2024-12-09 05:24:50.272253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.849 [2024-12-09 05:24:50.272261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.849 [2024-12-09 05:24:50.272270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.849 [2024-12-09 05:24:50.284395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.849 [2024-12-09 05:24:50.284758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-12-09 05:24:50.284777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.849 [2024-12-09 05:24:50.284787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.849 [2024-12-09 05:24:50.284959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.849 [2024-12-09 05:24:50.285131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.849 [2024-12-09 05:24:50.285142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.849 [2024-12-09 05:24:50.285151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.849 [2024-12-09 05:24:50.285160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.849 [2024-12-09 05:24:50.297328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.849 [2024-12-09 05:24:50.297752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-12-09 05:24:50.297772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.849 [2024-12-09 05:24:50.297782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.849 [2024-12-09 05:24:50.297954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.850 [2024-12-09 05:24:50.298126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.850 [2024-12-09 05:24:50.298138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.850 [2024-12-09 05:24:50.298147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.850 [2024-12-09 05:24:50.298155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:07.850 [2024-12-09 05:24:50.310295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:07.850 [2024-12-09 05:24:50.310582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.850 [2024-12-09 05:24:50.310601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:07.850 [2024-12-09 05:24:50.310611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:07.850 [2024-12-09 05:24:50.310782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:07.850 [2024-12-09 05:24:50.310954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:07.850 [2024-12-09 05:24:50.310966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:07.850 [2024-12-09 05:24:50.310974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:07.850 [2024-12-09 05:24:50.310983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.109 [2024-12-09 05:24:50.323278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.109 [2024-12-09 05:24:50.323614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.109 [2024-12-09 05:24:50.323633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.109 [2024-12-09 05:24:50.323644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.109 [2024-12-09 05:24:50.323815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.109 [2024-12-09 05:24:50.323988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.109 [2024-12-09 05:24:50.324000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.109 [2024-12-09 05:24:50.324010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.109 [2024-12-09 05:24:50.324019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.109 [2024-12-09 05:24:50.336316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.109 [2024-12-09 05:24:50.336663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.109 [2024-12-09 05:24:50.336682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.109 [2024-12-09 05:24:50.336695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.109 [2024-12-09 05:24:50.336867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.109 [2024-12-09 05:24:50.337039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.109 [2024-12-09 05:24:50.337051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.109 [2024-12-09 05:24:50.337060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.109 [2024-12-09 05:24:50.337069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.109 [2024-12-09 05:24:50.349357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.109 [2024-12-09 05:24:50.349705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.109 [2024-12-09 05:24:50.349724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.109 [2024-12-09 05:24:50.349734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.109 [2024-12-09 05:24:50.349905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.109 [2024-12-09 05:24:50.350077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.109 [2024-12-09 05:24:50.350088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.109 [2024-12-09 05:24:50.350097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.109 [2024-12-09 05:24:50.350105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.109 [2024-12-09 05:24:50.362401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.109 [2024-12-09 05:24:50.362820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.109 [2024-12-09 05:24:50.362873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.109 [2024-12-09 05:24:50.362906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.109 [2024-12-09 05:24:50.363352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.109 [2024-12-09 05:24:50.363526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.109 [2024-12-09 05:24:50.363538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.109 [2024-12-09 05:24:50.363547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.109 [2024-12-09 05:24:50.363556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.109 [2024-12-09 05:24:50.375378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.109 [2024-12-09 05:24:50.375817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.109 [2024-12-09 05:24:50.375836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.109 [2024-12-09 05:24:50.375847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.109 [2024-12-09 05:24:50.376019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.109 [2024-12-09 05:24:50.376194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.109 [2024-12-09 05:24:50.376206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.109 [2024-12-09 05:24:50.376221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.109 [2024-12-09 05:24:50.376231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.109 [2024-12-09 05:24:50.388362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.109 [2024-12-09 05:24:50.388785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.109 [2024-12-09 05:24:50.388803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.109 [2024-12-09 05:24:50.388813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.109 [2024-12-09 05:24:50.388984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.109 [2024-12-09 05:24:50.389155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.109 [2024-12-09 05:24:50.389167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.109 [2024-12-09 05:24:50.389176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.109 [2024-12-09 05:24:50.389184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.109 [2024-12-09 05:24:50.401318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.109 [2024-12-09 05:24:50.401730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.109 [2024-12-09 05:24:50.401749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.109 [2024-12-09 05:24:50.401759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.109 [2024-12-09 05:24:50.401930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.109 [2024-12-09 05:24:50.402102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.109 [2024-12-09 05:24:50.402114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.109 [2024-12-09 05:24:50.402123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.109 [2024-12-09 05:24:50.402131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.109 [2024-12-09 05:24:50.414254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.109 [2024-12-09 05:24:50.414679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.109 [2024-12-09 05:24:50.414698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.109 [2024-12-09 05:24:50.414708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.109 [2024-12-09 05:24:50.414885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.109 [2024-12-09 05:24:50.415053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.109 [2024-12-09 05:24:50.415064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.109 [2024-12-09 05:24:50.415076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.109 [2024-12-09 05:24:50.415085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.109 [2024-12-09 05:24:50.427166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.109 [2024-12-09 05:24:50.427523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.109 [2024-12-09 05:24:50.427542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.109 [2024-12-09 05:24:50.427553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.109 [2024-12-09 05:24:50.427721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.109 [2024-12-09 05:24:50.427887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.109 [2024-12-09 05:24:50.427898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.109 [2024-12-09 05:24:50.427908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.109 [2024-12-09 05:24:50.427916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.109 [2024-12-09 05:24:50.440042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.109 [2024-12-09 05:24:50.440408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.109 [2024-12-09 05:24:50.440427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.109 [2024-12-09 05:24:50.440437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.109 [2024-12-09 05:24:50.440608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.109 [2024-12-09 05:24:50.440780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.109 [2024-12-09 05:24:50.440791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.109 [2024-12-09 05:24:50.440800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.110 [2024-12-09 05:24:50.440809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.110 [2024-12-09 05:24:50.453069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.110 [2024-12-09 05:24:50.453498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.110 [2024-12-09 05:24:50.453518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.110 [2024-12-09 05:24:50.453528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.110 [2024-12-09 05:24:50.453699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.110 [2024-12-09 05:24:50.453871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.110 [2024-12-09 05:24:50.453883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.110 [2024-12-09 05:24:50.453892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.110 [2024-12-09 05:24:50.453900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.110 [2024-12-09 05:24:50.466098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.110 [2024-12-09 05:24:50.466535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.110 [2024-12-09 05:24:50.466556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.110 [2024-12-09 05:24:50.466566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.110 [2024-12-09 05:24:50.466738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.110 [2024-12-09 05:24:50.466910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.110 [2024-12-09 05:24:50.466922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.110 [2024-12-09 05:24:50.466931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.110 [2024-12-09 05:24:50.466940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.110 [2024-12-09 05:24:50.478958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.110 [2024-12-09 05:24:50.479388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.110 [2024-12-09 05:24:50.479435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.110 [2024-12-09 05:24:50.479468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.110 [2024-12-09 05:24:50.480023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.110 [2024-12-09 05:24:50.480192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.110 [2024-12-09 05:24:50.480203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.110 [2024-12-09 05:24:50.480219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.110 [2024-12-09 05:24:50.480227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.110 [2024-12-09 05:24:50.493974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.110 [2024-12-09 05:24:50.494437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.110 [2024-12-09 05:24:50.494463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.110 [2024-12-09 05:24:50.494478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.110 [2024-12-09 05:24:50.494737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.110 [2024-12-09 05:24:50.494999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.110 [2024-12-09 05:24:50.495016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.110 [2024-12-09 05:24:50.495030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.110 [2024-12-09 05:24:50.495043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.110 [2024-12-09 05:24:50.506989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.110 [2024-12-09 05:24:50.507420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.110 [2024-12-09 05:24:50.507441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.110 [2024-12-09 05:24:50.507454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.110 [2024-12-09 05:24:50.507632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.110 [2024-12-09 05:24:50.507808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.110 [2024-12-09 05:24:50.507820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.110 [2024-12-09 05:24:50.507830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.110 [2024-12-09 05:24:50.507839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.110 [2024-12-09 05:24:50.519800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.110 [2024-12-09 05:24:50.520230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.110 [2024-12-09 05:24:50.520285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.110 [2024-12-09 05:24:50.520318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.110 [2024-12-09 05:24:50.520913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.110 [2024-12-09 05:24:50.521346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.110 [2024-12-09 05:24:50.521371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.110 [2024-12-09 05:24:50.521392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.110 [2024-12-09 05:24:50.521410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.110 [2024-12-09 05:24:50.534887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.110 [2024-12-09 05:24:50.535338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.110 [2024-12-09 05:24:50.535365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.110 [2024-12-09 05:24:50.535379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.110 [2024-12-09 05:24:50.535640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.110 [2024-12-09 05:24:50.535903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.110 [2024-12-09 05:24:50.535919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.110 [2024-12-09 05:24:50.535932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.110 [2024-12-09 05:24:50.535945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.110 [2024-12-09 05:24:50.547920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.110 [2024-12-09 05:24:50.548324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.110 [2024-12-09 05:24:50.548344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.110 [2024-12-09 05:24:50.548354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.110 [2024-12-09 05:24:50.548532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.110 [2024-12-09 05:24:50.548714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.110 [2024-12-09 05:24:50.548726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.110 [2024-12-09 05:24:50.548736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.110 [2024-12-09 05:24:50.548745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.110 [2024-12-09 05:24:50.560895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.110 [2024-12-09 05:24:50.561179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.110 [2024-12-09 05:24:50.561198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.110 [2024-12-09 05:24:50.561213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.110 [2024-12-09 05:24:50.561381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.110 [2024-12-09 05:24:50.561548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.110 [2024-12-09 05:24:50.561559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.110 [2024-12-09 05:24:50.561568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.110 [2024-12-09 05:24:50.561576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.110 [2024-12-09 05:24:50.573826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.110 [2024-12-09 05:24:50.574222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.110 [2024-12-09 05:24:50.574241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.110 [2024-12-09 05:24:50.574251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.110 [2024-12-09 05:24:50.574421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.110 [2024-12-09 05:24:50.574604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.110 [2024-12-09 05:24:50.574617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.110 [2024-12-09 05:24:50.574625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.110 [2024-12-09 05:24:50.574634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.413 [2024-12-09 05:24:50.586714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.413 [2024-12-09 05:24:50.587086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.413 [2024-12-09 05:24:50.587105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.413 [2024-12-09 05:24:50.587115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.413 [2024-12-09 05:24:50.587306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.413 [2024-12-09 05:24:50.587479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.413 [2024-12-09 05:24:50.587491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.413 [2024-12-09 05:24:50.587503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.413 [2024-12-09 05:24:50.587512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.413 [2024-12-09 05:24:50.599553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.413 [2024-12-09 05:24:50.599973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.413 [2024-12-09 05:24:50.599991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.413 [2024-12-09 05:24:50.600000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.413 [2024-12-09 05:24:50.600158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.413 [2024-12-09 05:24:50.600343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.413 [2024-12-09 05:24:50.600355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.413 [2024-12-09 05:24:50.600364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.413 [2024-12-09 05:24:50.600372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.413 [2024-12-09 05:24:50.612455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.413 [2024-12-09 05:24:50.612867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.413 [2024-12-09 05:24:50.612885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.413 [2024-12-09 05:24:50.612894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.413 [2024-12-09 05:24:50.613052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.413 [2024-12-09 05:24:50.613213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.413 [2024-12-09 05:24:50.613224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.413 [2024-12-09 05:24:50.613249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.413 [2024-12-09 05:24:50.613258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.413 [2024-12-09 05:24:50.625330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.413 [2024-12-09 05:24:50.625751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.413 [2024-12-09 05:24:50.625794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.413 [2024-12-09 05:24:50.625829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.413 [2024-12-09 05:24:50.626442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.413 [2024-12-09 05:24:50.626625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.413 [2024-12-09 05:24:50.626636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.413 [2024-12-09 05:24:50.626645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.413 [2024-12-09 05:24:50.626653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.413 [2024-12-09 05:24:50.638193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.413 [2024-12-09 05:24:50.638540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.413 [2024-12-09 05:24:50.638558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.413 [2024-12-09 05:24:50.638568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.413 [2024-12-09 05:24:50.638726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.413 [2024-12-09 05:24:50.638883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.413 [2024-12-09 05:24:50.638894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.413 [2024-12-09 05:24:50.638902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.413 [2024-12-09 05:24:50.638910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.413 [2024-12-09 05:24:50.651156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.413 [2024-12-09 05:24:50.651572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.413 [2024-12-09 05:24:50.651626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.413 [2024-12-09 05:24:50.651658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.413 [2024-12-09 05:24:50.652082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.413 [2024-12-09 05:24:50.652274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.413 [2024-12-09 05:24:50.652286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.413 [2024-12-09 05:24:50.652295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.413 [2024-12-09 05:24:50.652305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.413 [2024-12-09 05:24:50.664081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.413 [2024-12-09 05:24:50.664508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.413 [2024-12-09 05:24:50.664527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.413 [2024-12-09 05:24:50.664538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.413 [2024-12-09 05:24:50.664705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.413 [2024-12-09 05:24:50.664872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.413 [2024-12-09 05:24:50.664884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.413 [2024-12-09 05:24:50.664893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.413 [2024-12-09 05:24:50.664902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.413 [2024-12-09 05:24:50.676962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.413 [2024-12-09 05:24:50.677387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.413 [2024-12-09 05:24:50.677406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.413 [2024-12-09 05:24:50.677418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.413 [2024-12-09 05:24:50.677577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.413 [2024-12-09 05:24:50.677736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.413 [2024-12-09 05:24:50.677747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.413 [2024-12-09 05:24:50.677755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.413 [2024-12-09 05:24:50.677763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.413 [2024-12-09 05:24:50.689708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.413 [2024-12-09 05:24:50.690108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.413 [2024-12-09 05:24:50.690126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.413 [2024-12-09 05:24:50.690136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.413 [2024-12-09 05:24:50.690322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.414 [2024-12-09 05:24:50.690491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.414 [2024-12-09 05:24:50.690503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.414 [2024-12-09 05:24:50.690511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.414 [2024-12-09 05:24:50.690520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.414 [2024-12-09 05:24:50.702611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.414 [2024-12-09 05:24:50.703022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.414 [2024-12-09 05:24:50.703039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.414 [2024-12-09 05:24:50.703049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.414 [2024-12-09 05:24:50.703213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.414 [2024-12-09 05:24:50.703397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.414 [2024-12-09 05:24:50.703408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.414 [2024-12-09 05:24:50.703417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.414 [2024-12-09 05:24:50.703425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.414 [2024-12-09 05:24:50.715435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.414 [2024-12-09 05:24:50.715866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.414 [2024-12-09 05:24:50.715918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.414 [2024-12-09 05:24:50.715951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.414 [2024-12-09 05:24:50.716562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.414 [2024-12-09 05:24:50.716809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.414 [2024-12-09 05:24:50.716821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.414 [2024-12-09 05:24:50.716829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.414 [2024-12-09 05:24:50.716838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.414 [2024-12-09 05:24:50.728337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.414 [2024-12-09 05:24:50.728727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.414 [2024-12-09 05:24:50.728746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.414 [2024-12-09 05:24:50.728756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.414 [2024-12-09 05:24:50.728914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.414 [2024-12-09 05:24:50.729073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.414 [2024-12-09 05:24:50.729083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.414 [2024-12-09 05:24:50.729092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.414 [2024-12-09 05:24:50.729099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.414 [2024-12-09 05:24:50.741250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.414 [2024-12-09 05:24:50.741669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.414 [2024-12-09 05:24:50.741689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.414 [2024-12-09 05:24:50.741698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.414 [2024-12-09 05:24:50.741865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.414 [2024-12-09 05:24:50.742032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.414 [2024-12-09 05:24:50.742043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.414 [2024-12-09 05:24:50.742052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.414 [2024-12-09 05:24:50.742061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.414 [2024-12-09 05:24:50.754073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.414 [2024-12-09 05:24:50.754500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.414 [2024-12-09 05:24:50.754520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.414 [2024-12-09 05:24:50.754530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.414 [2024-12-09 05:24:50.754696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.414 [2024-12-09 05:24:50.754863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.414 [2024-12-09 05:24:50.754875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.414 [2024-12-09 05:24:50.754887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.414 [2024-12-09 05:24:50.754896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.414 [2024-12-09 05:24:50.766925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.414 [2024-12-09 05:24:50.767316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.414 [2024-12-09 05:24:50.767335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.414 [2024-12-09 05:24:50.767345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.414 [2024-12-09 05:24:50.767503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.414 [2024-12-09 05:24:50.767661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.414 [2024-12-09 05:24:50.767672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.414 [2024-12-09 05:24:50.767680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.414 [2024-12-09 05:24:50.767688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.414 [2024-12-09 05:24:50.779717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.414 [2024-12-09 05:24:50.780135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.414 [2024-12-09 05:24:50.780178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.414 [2024-12-09 05:24:50.780227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.414 [2024-12-09 05:24:50.780824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.414 [2024-12-09 05:24:50.781022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.414 [2024-12-09 05:24:50.781033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.414 [2024-12-09 05:24:50.781041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.414 [2024-12-09 05:24:50.781049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.414 [2024-12-09 05:24:50.792563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:08.414 [2024-12-09 05:24:50.792989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.414 [2024-12-09 05:24:50.793008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:08.414 [2024-12-09 05:24:50.793017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:08.414 [2024-12-09 05:24:50.793188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:08.414 [2024-12-09 05:24:50.793368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:08.414 [2024-12-09 05:24:50.793381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:08.414 [2024-12-09 05:24:50.793389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:08.414 [2024-12-09 05:24:50.793398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:08.414 [2024-12-09 05:24:50.805460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.414 [2024-12-09 05:24:50.805806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-12-09 05:24:50.805825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.414 [2024-12-09 05:24:50.805834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.414 [2024-12-09 05:24:50.806001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.414 [2024-12-09 05:24:50.806168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.414 [2024-12-09 05:24:50.806179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.414 [2024-12-09 05:24:50.806188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.414 [2024-12-09 05:24:50.806196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.414 [2024-12-09 05:24:50.818359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.414 [2024-12-09 05:24:50.818804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-12-09 05:24:50.818858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.414 [2024-12-09 05:24:50.818891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.414 [2024-12-09 05:24:50.819403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.415 [2024-12-09 05:24:50.819572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.415 [2024-12-09 05:24:50.819584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.415 [2024-12-09 05:24:50.819593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.415 [2024-12-09 05:24:50.819602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.415 [2024-12-09 05:24:50.831339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.415 [2024-12-09 05:24:50.831762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.415 [2024-12-09 05:24:50.831816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.415 [2024-12-09 05:24:50.831848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.415 [2024-12-09 05:24:50.832274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.415 [2024-12-09 05:24:50.832444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.415 [2024-12-09 05:24:50.832455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.415 [2024-12-09 05:24:50.832464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.415 [2024-12-09 05:24:50.832472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.415 [2024-12-09 05:24:50.844252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.415 [2024-12-09 05:24:50.844686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.415 [2024-12-09 05:24:50.844740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.415 [2024-12-09 05:24:50.844780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.415 [2024-12-09 05:24:50.845311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.415 [2024-12-09 05:24:50.845485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.415 [2024-12-09 05:24:50.845495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.415 [2024-12-09 05:24:50.845504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.415 [2024-12-09 05:24:50.845511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.415 [2024-12-09 05:24:50.857232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.415 [2024-12-09 05:24:50.857657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.415 [2024-12-09 05:24:50.857703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.415 [2024-12-09 05:24:50.857736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.415 [2024-12-09 05:24:50.858282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.415 [2024-12-09 05:24:50.858452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.415 [2024-12-09 05:24:50.858463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.415 [2024-12-09 05:24:50.858472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.415 [2024-12-09 05:24:50.858481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.415 [2024-12-09 05:24:50.870052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.415 [2024-12-09 05:24:50.870491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.415 [2024-12-09 05:24:50.870510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.415 [2024-12-09 05:24:50.870520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.415 [2024-12-09 05:24:50.870687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.415 [2024-12-09 05:24:50.870854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.415 [2024-12-09 05:24:50.870866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.415 [2024-12-09 05:24:50.870874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.415 [2024-12-09 05:24:50.870883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.674 [2024-12-09 05:24:50.883042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.674 [2024-12-09 05:24:50.883490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.674 [2024-12-09 05:24:50.883509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.674 [2024-12-09 05:24:50.883519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.674 [2024-12-09 05:24:50.883686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.674 [2024-12-09 05:24:50.883857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.674 [2024-12-09 05:24:50.883869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.674 [2024-12-09 05:24:50.883877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.674 [2024-12-09 05:24:50.883885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.675 [2024-12-09 05:24:50.895835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.675 [2024-12-09 05:24:50.896272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.675 [2024-12-09 05:24:50.896295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.675 [2024-12-09 05:24:50.896304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.675 [2024-12-09 05:24:50.896466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.675 [2024-12-09 05:24:50.896644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.675 [2024-12-09 05:24:50.896656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.675 [2024-12-09 05:24:50.896664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.675 [2024-12-09 05:24:50.896673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.675 [2024-12-09 05:24:50.908769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.675 [2024-12-09 05:24:50.909230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.675 [2024-12-09 05:24:50.909285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.675 [2024-12-09 05:24:50.909318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.675 [2024-12-09 05:24:50.909676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.675 [2024-12-09 05:24:50.909845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.675 [2024-12-09 05:24:50.909856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.675 [2024-12-09 05:24:50.909866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.675 [2024-12-09 05:24:50.909874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.675 [2024-12-09 05:24:50.921737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.675 [2024-12-09 05:24:50.922161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.675 [2024-12-09 05:24:50.922180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.675 [2024-12-09 05:24:50.922191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.675 [2024-12-09 05:24:50.922370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.675 [2024-12-09 05:24:50.922544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.675 [2024-12-09 05:24:50.922556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.675 [2024-12-09 05:24:50.922569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.675 [2024-12-09 05:24:50.922579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.675 [2024-12-09 05:24:50.934658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.675 [2024-12-09 05:24:50.935077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.675 [2024-12-09 05:24:50.935096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.675 [2024-12-09 05:24:50.935106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.675 [2024-12-09 05:24:50.935278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.675 [2024-12-09 05:24:50.935447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.675 [2024-12-09 05:24:50.935458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.675 [2024-12-09 05:24:50.935467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.675 [2024-12-09 05:24:50.935475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.675 [2024-12-09 05:24:50.947713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.675 [2024-12-09 05:24:50.948139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.675 [2024-12-09 05:24:50.948158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.675 [2024-12-09 05:24:50.948168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.675 [2024-12-09 05:24:50.948346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.675 [2024-12-09 05:24:50.948519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.675 [2024-12-09 05:24:50.948531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.675 [2024-12-09 05:24:50.948540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.675 [2024-12-09 05:24:50.948548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.675 [2024-12-09 05:24:50.960648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.675 [2024-12-09 05:24:50.961076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.675 [2024-12-09 05:24:50.961095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.675 [2024-12-09 05:24:50.961105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.675 [2024-12-09 05:24:50.961282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.675 [2024-12-09 05:24:50.961454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.675 [2024-12-09 05:24:50.961465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.675 [2024-12-09 05:24:50.961475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.675 [2024-12-09 05:24:50.961484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.675 [2024-12-09 05:24:50.973547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.675 [2024-12-09 05:24:50.973983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.675 [2024-12-09 05:24:50.974002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.675 [2024-12-09 05:24:50.974012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.675 [2024-12-09 05:24:50.974183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.675 [2024-12-09 05:24:50.974371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.675 [2024-12-09 05:24:50.974383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.675 [2024-12-09 05:24:50.974392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.675 [2024-12-09 05:24:50.974400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.675 [2024-12-09 05:24:50.986434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.675 [2024-12-09 05:24:50.986810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.675 [2024-12-09 05:24:50.986828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.675 [2024-12-09 05:24:50.986838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.675 [2024-12-09 05:24:50.987005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.675 [2024-12-09 05:24:50.987173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.675 [2024-12-09 05:24:50.987184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.675 [2024-12-09 05:24:50.987193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.675 [2024-12-09 05:24:50.987201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.675 [2024-12-09 05:24:50.999239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.675 [2024-12-09 05:24:50.999648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.675 [2024-12-09 05:24:50.999701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.675 [2024-12-09 05:24:50.999734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.675 [2024-12-09 05:24:51.000347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.675 [2024-12-09 05:24:51.000507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.675 [2024-12-09 05:24:51.000518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.675 [2024-12-09 05:24:51.000526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.675 [2024-12-09 05:24:51.000534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.675 [2024-12-09 05:24:51.011983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.675 [2024-12-09 05:24:51.012318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.675 [2024-12-09 05:24:51.012338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.675 [2024-12-09 05:24:51.012350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.675 [2024-12-09 05:24:51.012510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.675 [2024-12-09 05:24:51.012668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.675 [2024-12-09 05:24:51.012679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.676 [2024-12-09 05:24:51.012688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.676 [2024-12-09 05:24:51.012695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.676 [2024-12-09 05:24:51.024800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.676 [2024-12-09 05:24:51.025198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.676 [2024-12-09 05:24:51.025266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.676 [2024-12-09 05:24:51.025299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.676 [2024-12-09 05:24:51.025808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.676 [2024-12-09 05:24:51.025967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.676 [2024-12-09 05:24:51.025977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.676 [2024-12-09 05:24:51.025985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.676 [2024-12-09 05:24:51.025992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.676 [2024-12-09 05:24:51.037561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.676 [2024-12-09 05:24:51.037971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.676 [2024-12-09 05:24:51.037989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.676 [2024-12-09 05:24:51.037998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.676 [2024-12-09 05:24:51.038156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.676 [2024-12-09 05:24:51.038342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.676 [2024-12-09 05:24:51.038354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.676 [2024-12-09 05:24:51.038363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.676 [2024-12-09 05:24:51.038371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.676 [2024-12-09 05:24:51.050328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.676 [2024-12-09 05:24:51.050744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.676 [2024-12-09 05:24:51.050796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.676 [2024-12-09 05:24:51.050829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.676 [2024-12-09 05:24:51.051439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.676 [2024-12-09 05:24:51.052012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.676 [2024-12-09 05:24:51.052024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.676 [2024-12-09 05:24:51.052033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.676 [2024-12-09 05:24:51.052041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.676 [2024-12-09 05:24:51.063303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.676 [2024-12-09 05:24:51.063699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.676 [2024-12-09 05:24:51.063719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.676 [2024-12-09 05:24:51.063729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.676 [2024-12-09 05:24:51.063896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.676 [2024-12-09 05:24:51.064064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.676 [2024-12-09 05:24:51.064076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.676 [2024-12-09 05:24:51.064084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.676 [2024-12-09 05:24:51.064092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.676 [2024-12-09 05:24:51.076158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.676 [2024-12-09 05:24:51.076598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.676 [2024-12-09 05:24:51.076651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.676 [2024-12-09 05:24:51.076684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.676 [2024-12-09 05:24:51.077072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.676 [2024-12-09 05:24:51.077245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.676 [2024-12-09 05:24:51.077257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.676 [2024-12-09 05:24:51.077266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.676 [2024-12-09 05:24:51.077285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.676 [2024-12-09 05:24:51.088851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.676 [2024-12-09 05:24:51.089271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.676 [2024-12-09 05:24:51.089326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.676 [2024-12-09 05:24:51.089358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.676 [2024-12-09 05:24:51.089859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.676 [2024-12-09 05:24:51.090019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.676 [2024-12-09 05:24:51.090030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.676 [2024-12-09 05:24:51.090042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.676 [2024-12-09 05:24:51.090050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.676 [2024-12-09 05:24:51.101641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.676 [2024-12-09 05:24:51.102029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.676 [2024-12-09 05:24:51.102048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.676 [2024-12-09 05:24:51.102057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.676 [2024-12-09 05:24:51.102221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.676 [2024-12-09 05:24:51.102381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.676 [2024-12-09 05:24:51.102397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.676 [2024-12-09 05:24:51.102406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.676 [2024-12-09 05:24:51.102414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.676 6051.60 IOPS, 23.64 MiB/s [2024-12-09T04:24:51.146Z] [2024-12-09 05:24:51.114349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.676 [2024-12-09 05:24:51.114753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.676 [2024-12-09 05:24:51.114772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.676 [2024-12-09 05:24:51.114781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.676 [2024-12-09 05:24:51.114939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.676 [2024-12-09 05:24:51.115098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.676 [2024-12-09 05:24:51.115109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.676 [2024-12-09 05:24:51.115117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.676 [2024-12-09 05:24:51.115125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.676 [2024-12-09 05:24:51.127286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.677 [2024-12-09 05:24:51.127724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.677 [2024-12-09 05:24:51.127779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.677 [2024-12-09 05:24:51.127812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.677 [2024-12-09 05:24:51.128423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.677 [2024-12-09 05:24:51.128978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.677 [2024-12-09 05:24:51.128989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.677 [2024-12-09 05:24:51.128998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.677 [2024-12-09 05:24:51.129006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.677 [2024-12-09 05:24:51.140168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.677 [2024-12-09 05:24:51.140462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.677 [2024-12-09 05:24:51.140482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.677 [2024-12-09 05:24:51.140491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.677 [2024-12-09 05:24:51.140658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.677 [2024-12-09 05:24:51.140826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.677 [2024-12-09 05:24:51.140837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.677 [2024-12-09 05:24:51.140846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.677 [2024-12-09 05:24:51.140854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.950 [2024-12-09 05:24:51.152964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.950 [2024-12-09 05:24:51.153395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.950 [2024-12-09 05:24:51.153449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.950 [2024-12-09 05:24:51.153482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.950 [2024-12-09 05:24:51.154027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.950 [2024-12-09 05:24:51.154187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.950 [2024-12-09 05:24:51.154198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.950 [2024-12-09 05:24:51.154213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.950 [2024-12-09 05:24:51.154222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.950 [2024-12-09 05:24:51.165675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.950 [2024-12-09 05:24:51.166085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.950 [2024-12-09 05:24:51.166137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.950 [2024-12-09 05:24:51.166169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.950 [2024-12-09 05:24:51.166777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.950 [2024-12-09 05:24:51.167269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.950 [2024-12-09 05:24:51.167280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.950 [2024-12-09 05:24:51.167289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.950 [2024-12-09 05:24:51.167298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.950 [2024-12-09 05:24:51.178482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.950 [2024-12-09 05:24:51.178903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.950 [2024-12-09 05:24:51.178957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.950 [2024-12-09 05:24:51.179006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.950 [2024-12-09 05:24:51.179401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.950 [2024-12-09 05:24:51.179571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.950 [2024-12-09 05:24:51.179583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.950 [2024-12-09 05:24:51.179592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.950 [2024-12-09 05:24:51.179600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.950 [2024-12-09 05:24:51.191182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.950 [2024-12-09 05:24:51.191600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.950 [2024-12-09 05:24:51.191619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.950 [2024-12-09 05:24:51.191628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.950 [2024-12-09 05:24:51.191787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.950 [2024-12-09 05:24:51.191945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.950 [2024-12-09 05:24:51.191957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.950 [2024-12-09 05:24:51.191965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.950 [2024-12-09 05:24:51.191972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.950 [2024-12-09 05:24:51.203928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.950 [2024-12-09 05:24:51.204340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.950 [2024-12-09 05:24:51.204358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.950 [2024-12-09 05:24:51.204367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.950 [2024-12-09 05:24:51.204526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.950 [2024-12-09 05:24:51.204684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.950 [2024-12-09 05:24:51.204695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.950 [2024-12-09 05:24:51.204703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.951 [2024-12-09 05:24:51.204711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.951 [2024-12-09 05:24:51.216624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.951 [2024-12-09 05:24:51.217018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.951 [2024-12-09 05:24:51.217036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.951 [2024-12-09 05:24:51.217045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.951 [2024-12-09 05:24:51.217202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.951 [2024-12-09 05:24:51.217394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.951 [2024-12-09 05:24:51.217405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.951 [2024-12-09 05:24:51.217414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.951 [2024-12-09 05:24:51.217422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.951 [2024-12-09 05:24:51.229312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.951 [2024-12-09 05:24:51.229727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.951 [2024-12-09 05:24:51.229745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.951 [2024-12-09 05:24:51.229754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.951 [2024-12-09 05:24:51.229912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.951 [2024-12-09 05:24:51.230070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.951 [2024-12-09 05:24:51.230081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.951 [2024-12-09 05:24:51.230090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.951 [2024-12-09 05:24:51.230097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.951 [2024-12-09 05:24:51.242123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.951 [2024-12-09 05:24:51.242535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.951 [2024-12-09 05:24:51.242580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.951 [2024-12-09 05:24:51.242614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.951 [2024-12-09 05:24:51.243139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.951 [2024-12-09 05:24:51.243325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.951 [2024-12-09 05:24:51.243337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.951 [2024-12-09 05:24:51.243346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.951 [2024-12-09 05:24:51.243354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.951 [2024-12-09 05:24:51.254923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.951 [2024-12-09 05:24:51.255334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.951 [2024-12-09 05:24:51.255387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.951 [2024-12-09 05:24:51.255420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.951 [2024-12-09 05:24:51.256021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.951 [2024-12-09 05:24:51.256179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.951 [2024-12-09 05:24:51.256189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.951 [2024-12-09 05:24:51.256200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.951 [2024-12-09 05:24:51.256214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.951 [2024-12-09 05:24:51.267621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.951 [2024-12-09 05:24:51.268029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.951 [2024-12-09 05:24:51.268091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.951 [2024-12-09 05:24:51.268123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.951 [2024-12-09 05:24:51.268734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.951 [2024-12-09 05:24:51.269345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.951 [2024-12-09 05:24:51.269380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.951 [2024-12-09 05:24:51.269415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.951 [2024-12-09 05:24:51.269433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.951 [2024-12-09 05:24:51.282859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.951 [2024-12-09 05:24:51.283393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.951 [2024-12-09 05:24:51.283447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.951 [2024-12-09 05:24:51.283479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.951 [2024-12-09 05:24:51.284079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.951 [2024-12-09 05:24:51.284351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.951 [2024-12-09 05:24:51.284368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.951 [2024-12-09 05:24:51.284382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.951 [2024-12-09 05:24:51.284395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.951 [2024-12-09 05:24:51.295773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.951 [2024-12-09 05:24:51.296130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.951 [2024-12-09 05:24:51.296182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.951 [2024-12-09 05:24:51.296229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.951 [2024-12-09 05:24:51.296664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.951 [2024-12-09 05:24:51.296838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.951 [2024-12-09 05:24:51.296849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.951 [2024-12-09 05:24:51.296859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.951 [2024-12-09 05:24:51.296868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.951 [2024-12-09 05:24:51.308584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.951 [2024-12-09 05:24:51.309010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.951 [2024-12-09 05:24:51.309028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.951 [2024-12-09 05:24:51.309038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.951 [2024-12-09 05:24:51.309206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.951 [2024-12-09 05:24:51.309380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.951 [2024-12-09 05:24:51.309391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.951 [2024-12-09 05:24:51.309400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.951 [2024-12-09 05:24:51.309408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.951 [2024-12-09 05:24:51.321494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.951 [2024-12-09 05:24:51.321926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.951 [2024-12-09 05:24:51.321981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.951 [2024-12-09 05:24:51.322014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.951 [2024-12-09 05:24:51.322532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.951 [2024-12-09 05:24:51.322702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.951 [2024-12-09 05:24:51.322714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.951 [2024-12-09 05:24:51.322724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.951 [2024-12-09 05:24:51.322733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.951 [2024-12-09 05:24:51.334240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.951 [2024-12-09 05:24:51.334641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.951 [2024-12-09 05:24:51.334695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.951 [2024-12-09 05:24:51.334727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.951 [2024-12-09 05:24:51.335228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.951 [2024-12-09 05:24:51.335413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.951 [2024-12-09 05:24:51.335423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.951 [2024-12-09 05:24:51.335431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.951 [2024-12-09 05:24:51.335439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.951 [2024-12-09 05:24:51.346927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.951 [2024-12-09 05:24:51.347340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.951 [2024-12-09 05:24:51.347395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.951 [2024-12-09 05:24:51.347436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.951 [2024-12-09 05:24:51.348030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.951 [2024-12-09 05:24:51.348648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.951 [2024-12-09 05:24:51.348659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.951 [2024-12-09 05:24:51.348667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.951 [2024-12-09 05:24:51.348674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.951 [2024-12-09 05:24:51.359931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.951 [2024-12-09 05:24:51.360358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.951 [2024-12-09 05:24:51.360378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.951 [2024-12-09 05:24:51.360388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.951 [2024-12-09 05:24:51.360560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.951 [2024-12-09 05:24:51.360732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.951 [2024-12-09 05:24:51.360744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.951 [2024-12-09 05:24:51.360753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.951 [2024-12-09 05:24:51.360761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.951 [2024-12-09 05:24:51.372861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.951 [2024-12-09 05:24:51.373287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.951 [2024-12-09 05:24:51.373308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.951 [2024-12-09 05:24:51.373318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.951 [2024-12-09 05:24:51.373490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.951 [2024-12-09 05:24:51.373662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.951 [2024-12-09 05:24:51.373674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.951 [2024-12-09 05:24:51.373683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.951 [2024-12-09 05:24:51.373691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.951 [2024-12-09 05:24:51.385787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.951 [2024-12-09 05:24:51.386195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.951 [2024-12-09 05:24:51.386220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.951 [2024-12-09 05:24:51.386231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.951 [2024-12-09 05:24:51.386402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.951 [2024-12-09 05:24:51.386577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.951 [2024-12-09 05:24:51.386589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.951 [2024-12-09 05:24:51.386598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.951 [2024-12-09 05:24:51.386606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.951 [2024-12-09 05:24:51.398704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.951 [2024-12-09 05:24:51.399002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.951 [2024-12-09 05:24:51.399021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.951 [2024-12-09 05:24:51.399031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.951 [2024-12-09 05:24:51.399202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.951 [2024-12-09 05:24:51.399381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.951 [2024-12-09 05:24:51.399392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.951 [2024-12-09 05:24:51.399401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.951 [2024-12-09 05:24:51.399409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:08.951 [2024-12-09 05:24:51.411691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:08.951 [2024-12-09 05:24:51.412094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.951 [2024-12-09 05:24:51.412114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:08.951 [2024-12-09 05:24:51.412124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:08.951 [2024-12-09 05:24:51.412302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:08.952 [2024-12-09 05:24:51.412475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:08.952 [2024-12-09 05:24:51.412487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:08.952 [2024-12-09 05:24:51.412496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:08.952 [2024-12-09 05:24:51.412505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.212 [2024-12-09 05:24:51.424620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.212 [2024-12-09 05:24:51.425047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.212 [2024-12-09 05:24:51.425066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.212 [2024-12-09 05:24:51.425076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.212 [2024-12-09 05:24:51.425254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.212 [2024-12-09 05:24:51.425427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.212 [2024-12-09 05:24:51.425439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.212 [2024-12-09 05:24:51.425452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.212 [2024-12-09 05:24:51.425461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.212 [2024-12-09 05:24:51.437612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.212 [2024-12-09 05:24:51.438036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.212 [2024-12-09 05:24:51.438055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.212 [2024-12-09 05:24:51.438065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.212 [2024-12-09 05:24:51.438243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.212 [2024-12-09 05:24:51.438416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.212 [2024-12-09 05:24:51.438428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.212 [2024-12-09 05:24:51.438436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.212 [2024-12-09 05:24:51.438445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.212 [2024-12-09 05:24:51.450543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.212 [2024-12-09 05:24:51.450966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.212 [2024-12-09 05:24:51.450986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.212 [2024-12-09 05:24:51.450996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.212 [2024-12-09 05:24:51.451167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.212 [2024-12-09 05:24:51.451348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.212 [2024-12-09 05:24:51.451361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.212 [2024-12-09 05:24:51.451370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.212 [2024-12-09 05:24:51.451379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.212 [2024-12-09 05:24:51.463713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.212 [2024-12-09 05:24:51.464144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.212 [2024-12-09 05:24:51.464190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.212 [2024-12-09 05:24:51.464243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.212 [2024-12-09 05:24:51.464787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.212 [2024-12-09 05:24:51.464961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.212 [2024-12-09 05:24:51.464974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.212 [2024-12-09 05:24:51.464983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.212 [2024-12-09 05:24:51.464991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.212 [2024-12-09 05:24:51.476630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.212 [2024-12-09 05:24:51.477032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.212 [2024-12-09 05:24:51.477052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.212 [2024-12-09 05:24:51.477062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.212 [2024-12-09 05:24:51.477242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.212 [2024-12-09 05:24:51.477416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.212 [2024-12-09 05:24:51.477428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.213 [2024-12-09 05:24:51.477437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.213 [2024-12-09 05:24:51.477445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.213 [2024-12-09 05:24:51.489573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.213 [2024-12-09 05:24:51.489996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.213 [2024-12-09 05:24:51.490016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.213 [2024-12-09 05:24:51.490026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.213 [2024-12-09 05:24:51.490197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.213 [2024-12-09 05:24:51.490376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.213 [2024-12-09 05:24:51.490388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.213 [2024-12-09 05:24:51.490397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.213 [2024-12-09 05:24:51.490406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.213 [2024-12-09 05:24:51.502531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.213 [2024-12-09 05:24:51.502934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.213 [2024-12-09 05:24:51.502953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.213 [2024-12-09 05:24:51.502964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.213 [2024-12-09 05:24:51.503136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.213 [2024-12-09 05:24:51.503315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.213 [2024-12-09 05:24:51.503327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.213 [2024-12-09 05:24:51.503336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.213 [2024-12-09 05:24:51.503344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.213 [2024-12-09 05:24:51.515496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.213 [2024-12-09 05:24:51.515918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.213 [2024-12-09 05:24:51.515937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.213 [2024-12-09 05:24:51.515950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.213 [2024-12-09 05:24:51.516122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.213 [2024-12-09 05:24:51.516302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.213 [2024-12-09 05:24:51.516315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.213 [2024-12-09 05:24:51.516323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.213 [2024-12-09 05:24:51.516332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.213 [2024-12-09 05:24:51.528457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.213 [2024-12-09 05:24:51.528866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.213 [2024-12-09 05:24:51.528885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.213 [2024-12-09 05:24:51.528896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.213 [2024-12-09 05:24:51.529069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.213 [2024-12-09 05:24:51.529248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.213 [2024-12-09 05:24:51.529262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.213 [2024-12-09 05:24:51.529271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.213 [2024-12-09 05:24:51.529279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.213 [2024-12-09 05:24:51.541401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.213 [2024-12-09 05:24:51.541733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.213 [2024-12-09 05:24:51.541753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.213 [2024-12-09 05:24:51.541764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.213 [2024-12-09 05:24:51.541936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.213 [2024-12-09 05:24:51.542109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.213 [2024-12-09 05:24:51.542120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.213 [2024-12-09 05:24:51.542130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.213 [2024-12-09 05:24:51.542139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.213 [2024-12-09 05:24:51.554421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.213 [2024-12-09 05:24:51.554874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.213 [2024-12-09 05:24:51.554893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.213 [2024-12-09 05:24:51.554903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.213 [2024-12-09 05:24:51.555076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.213 [2024-12-09 05:24:51.555260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.213 [2024-12-09 05:24:51.555277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.213 [2024-12-09 05:24:51.555287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.213 [2024-12-09 05:24:51.555296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.213 [2024-12-09 05:24:51.567389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.213 [2024-12-09 05:24:51.567822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.213 [2024-12-09 05:24:51.567842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.213 [2024-12-09 05:24:51.567852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.213 [2024-12-09 05:24:51.568023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.213 [2024-12-09 05:24:51.568196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.213 [2024-12-09 05:24:51.568213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.213 [2024-12-09 05:24:51.568224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.213 [2024-12-09 05:24:51.568233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.213 [2024-12-09 05:24:51.580327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.213 [2024-12-09 05:24:51.580663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.213 [2024-12-09 05:24:51.580683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.213 [2024-12-09 05:24:51.580692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.213 [2024-12-09 05:24:51.580864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.213 [2024-12-09 05:24:51.581037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.213 [2024-12-09 05:24:51.581049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.213 [2024-12-09 05:24:51.581058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.213 [2024-12-09 05:24:51.581066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.213 [2024-12-09 05:24:51.593372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.213 [2024-12-09 05:24:51.593924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.213 [2024-12-09 05:24:51.593947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.213 [2024-12-09 05:24:51.593958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.213 [2024-12-09 05:24:51.594138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.213 [2024-12-09 05:24:51.594320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.213 [2024-12-09 05:24:51.594333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.213 [2024-12-09 05:24:51.594346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.213 [2024-12-09 05:24:51.594356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.213 [2024-12-09 05:24:51.606398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.213 [2024-12-09 05:24:51.606735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.213 [2024-12-09 05:24:51.606755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.213 [2024-12-09 05:24:51.606765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.213 [2024-12-09 05:24:51.606933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.214 [2024-12-09 05:24:51.607100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.214 [2024-12-09 05:24:51.607111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.214 [2024-12-09 05:24:51.607120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.214 [2024-12-09 05:24:51.607128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.214 [2024-12-09 05:24:51.619286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.214 [2024-12-09 05:24:51.619630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.214 [2024-12-09 05:24:51.619683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.214 [2024-12-09 05:24:51.619715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.214 [2024-12-09 05:24:51.620185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.214 [2024-12-09 05:24:51.620361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.214 [2024-12-09 05:24:51.620372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.214 [2024-12-09 05:24:51.620381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.214 [2024-12-09 05:24:51.620389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.214 [2024-12-09 05:24:51.632103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.214 [2024-12-09 05:24:51.632439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.214 [2024-12-09 05:24:51.632458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.214 [2024-12-09 05:24:51.632468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.214 [2024-12-09 05:24:51.632625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.214 [2024-12-09 05:24:51.632785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.214 [2024-12-09 05:24:51.632796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.214 [2024-12-09 05:24:51.632804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.214 [2024-12-09 05:24:51.632812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.214 [2024-12-09 05:24:51.644931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.214 [2024-12-09 05:24:51.645285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.214 [2024-12-09 05:24:51.645305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.214 [2024-12-09 05:24:51.645314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.214 [2024-12-09 05:24:51.645473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.214 [2024-12-09 05:24:51.645632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.214 [2024-12-09 05:24:51.645643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.214 [2024-12-09 05:24:51.645652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.214 [2024-12-09 05:24:51.645660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.214 [2024-12-09 05:24:51.657833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.214 [2024-12-09 05:24:51.658159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.214 [2024-12-09 05:24:51.658177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.214 [2024-12-09 05:24:51.658186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.214 [2024-12-09 05:24:51.658349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.214 [2024-12-09 05:24:51.658508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.214 [2024-12-09 05:24:51.658519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.214 [2024-12-09 05:24:51.658527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.214 [2024-12-09 05:24:51.658535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.214 [2024-12-09 05:24:51.670602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.214 [2024-12-09 05:24:51.670999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.214 [2024-12-09 05:24:51.671019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.214 [2024-12-09 05:24:51.671028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.214 [2024-12-09 05:24:51.671195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.214 [2024-12-09 05:24:51.671369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.214 [2024-12-09 05:24:51.671382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.214 [2024-12-09 05:24:51.671390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.214 [2024-12-09 05:24:51.671399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.474 [2024-12-09 05:24:51.683628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.474 [2024-12-09 05:24:51.683992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.474 [2024-12-09 05:24:51.684011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.474 [2024-12-09 05:24:51.684025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.474 [2024-12-09 05:24:51.684197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.474 [2024-12-09 05:24:51.684377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.474 [2024-12-09 05:24:51.684389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.474 [2024-12-09 05:24:51.684399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.474 [2024-12-09 05:24:51.684407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.474 [2024-12-09 05:24:51.696547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.474 [2024-12-09 05:24:51.696951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.474 [2024-12-09 05:24:51.696971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.474 [2024-12-09 05:24:51.696981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.474 [2024-12-09 05:24:51.697152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.474 [2024-12-09 05:24:51.697332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.474 [2024-12-09 05:24:51.697344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.474 [2024-12-09 05:24:51.697353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.474 [2024-12-09 05:24:51.697363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.474 [2024-12-09 05:24:51.709511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.474 [2024-12-09 05:24:51.709898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.474 [2024-12-09 05:24:51.709917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.474 [2024-12-09 05:24:51.709927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.474 [2024-12-09 05:24:51.710099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.474 [2024-12-09 05:24:51.710279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.474 [2024-12-09 05:24:51.710291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.474 [2024-12-09 05:24:51.710300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.474 [2024-12-09 05:24:51.710308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 655422 Killed "${NVMF_APP[@]}" "$@"
00:30:09.474 05:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:30:09.474 05:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:30:09.474 05:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:09.474 05:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:09.474 05:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:09.474 [2024-12-09 05:24:51.722568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.474 [2024-12-09 05:24:51.722980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.474 [2024-12-09 05:24:51.723000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.474 [2024-12-09 05:24:51.723010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.474 [2024-12-09 05:24:51.723182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.474 [2024-12-09 05:24:51.723360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.474 [2024-12-09 05:24:51.723373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.474 [2024-12-09 05:24:51.723381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.474 [2024-12-09 05:24:51.723390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.474 05:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=656930
00:30:09.474 05:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:30:09.474 05:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 656930
00:30:09.474 05:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 656930 ']'
00:30:09.474 05:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:09.474 05:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:09.474 05:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:09.475 05:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:09.475 05:24:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:09.475 [2024-12-09 05:24:51.735531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.475 [2024-12-09 05:24:51.735915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.475 [2024-12-09 05:24:51.735933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.475 [2024-12-09 05:24:51.735945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.475 [2024-12-09 05:24:51.736116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.475 [2024-12-09 05:24:51.736295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.475 [2024-12-09 05:24:51.736307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.475 [2024-12-09 05:24:51.736317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.475 [2024-12-09 05:24:51.736325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.475 [2024-12-09 05:24:51.748467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.475 [2024-12-09 05:24:51.748851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.475 [2024-12-09 05:24:51.748871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.475 [2024-12-09 05:24:51.748881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.475 [2024-12-09 05:24:51.749056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.475 [2024-12-09 05:24:51.749237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.475 [2024-12-09 05:24:51.749249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.475 [2024-12-09 05:24:51.749258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.475 [2024-12-09 05:24:51.749267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.475 [2024-12-09 05:24:51.761402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.475 [2024-12-09 05:24:51.761717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.475 [2024-12-09 05:24:51.761736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.475 [2024-12-09 05:24:51.761746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.475 [2024-12-09 05:24:51.761913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.475 [2024-12-09 05:24:51.762099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.475 [2024-12-09 05:24:51.762111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.475 [2024-12-09 05:24:51.762119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.475 [2024-12-09 05:24:51.762127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.475 [2024-12-09 05:24:51.774353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.475 [2024-12-09 05:24:51.774673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.475 [2024-12-09 05:24:51.774692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.475 [2024-12-09 05:24:51.774702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.475 [2024-12-09 05:24:51.774874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.475 [2024-12-09 05:24:51.775046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.475 [2024-12-09 05:24:51.775058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.475 [2024-12-09 05:24:51.775068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.475 [2024-12-09 05:24:51.775076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.475 [2024-12-09 05:24:51.781051] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
00:30:09.475 [2024-12-09 05:24:51.781096] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:09.475 [2024-12-09 05:24:51.787396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.475 [2024-12-09 05:24:51.787732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.475 [2024-12-09 05:24:51.787751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.475 [2024-12-09 05:24:51.787761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.475 [2024-12-09 05:24:51.787952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.475 [2024-12-09 05:24:51.788126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.475 [2024-12-09 05:24:51.788137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.475 [2024-12-09 05:24:51.788147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.475 [2024-12-09 05:24:51.788156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.475 [2024-12-09 05:24:51.800392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.475 [2024-12-09 05:24:51.800741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.475 [2024-12-09 05:24:51.800760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.475 [2024-12-09 05:24:51.800770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.475 [2024-12-09 05:24:51.800941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.475 [2024-12-09 05:24:51.801113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.475 [2024-12-09 05:24:51.801124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.475 [2024-12-09 05:24:51.801133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.475 [2024-12-09 05:24:51.801141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.475 [2024-12-09 05:24:51.813296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.475 [2024-12-09 05:24:51.813694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.475 [2024-12-09 05:24:51.813713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.475 [2024-12-09 05:24:51.813723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.475 [2024-12-09 05:24:51.813890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.475 [2024-12-09 05:24:51.814056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.475 [2024-12-09 05:24:51.814067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.475 [2024-12-09 05:24:51.814076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.475 [2024-12-09 05:24:51.814084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.475 [2024-12-09 05:24:51.826335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.475 [2024-12-09 05:24:51.826625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.475 [2024-12-09 05:24:51.826644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.475 [2024-12-09 05:24:51.826654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.475 [2024-12-09 05:24:51.826826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.475 [2024-12-09 05:24:51.826997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.475 [2024-12-09 05:24:51.827013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.475 [2024-12-09 05:24:51.827022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.475 [2024-12-09 05:24:51.827030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.475 [2024-12-09 05:24:51.839292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.475 [2024-12-09 05:24:51.839630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.475 [2024-12-09 05:24:51.839650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.475 [2024-12-09 05:24:51.839660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.475 [2024-12-09 05:24:51.839832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.475 [2024-12-09 05:24:51.840004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.475 [2024-12-09 05:24:51.840016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.475 [2024-12-09 05:24:51.840025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.475 [2024-12-09 05:24:51.840034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.475 [2024-12-09 05:24:51.852249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.475 [2024-12-09 05:24:51.852585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.475 [2024-12-09 05:24:51.852604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.475 [2024-12-09 05:24:51.852614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.476 [2024-12-09 05:24:51.852781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.476 [2024-12-09 05:24:51.852948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.476 [2024-12-09 05:24:51.852960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.476 [2024-12-09 05:24:51.852968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.476 [2024-12-09 05:24:51.852976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.476 [2024-12-09 05:24:51.865250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.476 [2024-12-09 05:24:51.865585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.476 [2024-12-09 05:24:51.865604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.476 [2024-12-09 05:24:51.865615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.476 [2024-12-09 05:24:51.865786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.476 [2024-12-09 05:24:51.865958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.476 [2024-12-09 05:24:51.865970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.476 [2024-12-09 05:24:51.865978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.476 [2024-12-09 05:24:51.865991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.476 [2024-12-09 05:24:51.878226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.476 [2024-12-09 05:24:51.878587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.476 [2024-12-09 05:24:51.878606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.476 [2024-12-09 05:24:51.878615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.476 [2024-12-09 05:24:51.878781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.476 [2024-12-09 05:24:51.878950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.476 [2024-12-09 05:24:51.878961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.476 [2024-12-09 05:24:51.878970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.476 [2024-12-09 05:24:51.878978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.476 [2024-12-09 05:24:51.879834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:30:09.476 [2024-12-09 05:24:51.891126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.476 [2024-12-09 05:24:51.891486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.476 [2024-12-09 05:24:51.891510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.476 [2024-12-09 05:24:51.891520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.476 [2024-12-09 05:24:51.891690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.476 [2024-12-09 05:24:51.891860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.476 [2024-12-09 05:24:51.891871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.476 [2024-12-09 05:24:51.891881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.476 [2024-12-09 05:24:51.891891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.476 [2024-12-09 05:24:51.904084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.476 [2024-12-09 05:24:51.904463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.476 [2024-12-09 05:24:51.904483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.476 [2024-12-09 05:24:51.904493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.476 [2024-12-09 05:24:51.904659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.476 [2024-12-09 05:24:51.904827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.476 [2024-12-09 05:24:51.904839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.476 [2024-12-09 05:24:51.904848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.476 [2024-12-09 05:24:51.904856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.476 [2024-12-09 05:24:51.916993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.476 [2024-12-09 05:24:51.917337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.476 [2024-12-09 05:24:51.917357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.476 [2024-12-09 05:24:51.917367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.476 [2024-12-09 05:24:51.917539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.476 [2024-12-09 05:24:51.917711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.476 [2024-12-09 05:24:51.917722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.476 [2024-12-09 05:24:51.917732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.476 [2024-12-09 05:24:51.917741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.476 [2024-12-09 05:24:51.921935] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:09.476 [2024-12-09 05:24:51.921962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:09.476 [2024-12-09 05:24:51.921971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:09.476 [2024-12-09 05:24:51.921980] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:09.476 [2024-12-09 05:24:51.921987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:09.476 [2024-12-09 05:24:51.923487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:09.476 [2024-12-09 05:24:51.923593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:09.476 [2024-12-09 05:24:51.923595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:30:09.476 [2024-12-09 05:24:51.930034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.476 [2024-12-09 05:24:51.930417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.476 [2024-12-09 05:24:51.930438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.476 [2024-12-09 05:24:51.930450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.476 [2024-12-09 05:24:51.930623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.476 [2024-12-09 05:24:51.930796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.476 [2024-12-09 05:24:51.930808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.476 [2024-12-09 05:24:51.930817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.476 [2024-12-09 05:24:51.930827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.735 [2024-12-09 05:24:51.943076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.736 [2024-12-09 05:24:51.943421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.736 [2024-12-09 05:24:51.943444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.736 [2024-12-09 05:24:51.943455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.736 [2024-12-09 05:24:51.943629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.736 [2024-12-09 05:24:51.943801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.736 [2024-12-09 05:24:51.943820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.736 [2024-12-09 05:24:51.943829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.736 [2024-12-09 05:24:51.943838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.736 [2024-12-09 05:24:51.956114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.736 [2024-12-09 05:24:51.956462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.736 [2024-12-09 05:24:51.956485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.736 [2024-12-09 05:24:51.956495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.736 [2024-12-09 05:24:51.956669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.736 [2024-12-09 05:24:51.956842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.736 [2024-12-09 05:24:51.956854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.736 [2024-12-09 05:24:51.956865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.736 [2024-12-09 05:24:51.956874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.736 [2024-12-09 05:24:51.969143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.736 [2024-12-09 05:24:51.969507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.736 [2024-12-09 05:24:51.969529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.736 [2024-12-09 05:24:51.969540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.736 [2024-12-09 05:24:51.969713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.736 [2024-12-09 05:24:51.969886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.736 [2024-12-09 05:24:51.969898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.736 [2024-12-09 05:24:51.969907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.736 [2024-12-09 05:24:51.969916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.736 [2024-12-09 05:24:51.982225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.736 [2024-12-09 05:24:51.982586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.736 [2024-12-09 05:24:51.982608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.736 [2024-12-09 05:24:51.982619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.736 [2024-12-09 05:24:51.982791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.736 [2024-12-09 05:24:51.982965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.736 [2024-12-09 05:24:51.982977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.736 [2024-12-09 05:24:51.982987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.736 [2024-12-09 05:24:51.983002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.736 [2024-12-09 05:24:51.995254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.736 [2024-12-09 05:24:51.995638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.736 [2024-12-09 05:24:51.995658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.736 [2024-12-09 05:24:51.995668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.736 [2024-12-09 05:24:51.995840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.736 [2024-12-09 05:24:51.996011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.736 [2024-12-09 05:24:51.996023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.736 [2024-12-09 05:24:51.996032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.736 [2024-12-09 05:24:51.996040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.736 [2024-12-09 05:24:52.008318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.736 [2024-12-09 05:24:52.008671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.736 [2024-12-09 05:24:52.008690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.736 [2024-12-09 05:24:52.008700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.736 [2024-12-09 05:24:52.008873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.736 [2024-12-09 05:24:52.009045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.736 [2024-12-09 05:24:52.009056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.736 [2024-12-09 05:24:52.009065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.736 [2024-12-09 05:24:52.009073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.736 [2024-12-09 05:24:52.021348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.736 [2024-12-09 05:24:52.021772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.736 [2024-12-09 05:24:52.021791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.736 [2024-12-09 05:24:52.021801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.736 [2024-12-09 05:24:52.021973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.736 [2024-12-09 05:24:52.022146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.736 [2024-12-09 05:24:52.022158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.736 [2024-12-09 05:24:52.022166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.736 [2024-12-09 05:24:52.022175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.736 [2024-12-09 05:24:52.034266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.736 [2024-12-09 05:24:52.034631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.736 [2024-12-09 05:24:52.034650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.736 [2024-12-09 05:24:52.034660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.736 [2024-12-09 05:24:52.034831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.736 [2024-12-09 05:24:52.035002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.736 [2024-12-09 05:24:52.035014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.736 [2024-12-09 05:24:52.035023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.736 [2024-12-09 05:24:52.035031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.736 [2024-12-09 05:24:52.047321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.736 [2024-12-09 05:24:52.047743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.736 [2024-12-09 05:24:52.047762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.736 [2024-12-09 05:24:52.047772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.736 [2024-12-09 05:24:52.047942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.736 [2024-12-09 05:24:52.048115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.736 [2024-12-09 05:24:52.048126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.736 [2024-12-09 05:24:52.048136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.736 [2024-12-09 05:24:52.048144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.736 [2024-12-09 05:24:52.060233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.736 [2024-12-09 05:24:52.060662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.736 [2024-12-09 05:24:52.060681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.736 [2024-12-09 05:24:52.060692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.736 [2024-12-09 05:24:52.060863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.736 [2024-12-09 05:24:52.061035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.736 [2024-12-09 05:24:52.061046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.737 [2024-12-09 05:24:52.061055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.737 [2024-12-09 05:24:52.061064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.737 [2024-12-09 05:24:52.073177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.737 [2024-12-09 05:24:52.073521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.737 [2024-12-09 05:24:52.073540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.737 [2024-12-09 05:24:52.073550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.737 [2024-12-09 05:24:52.073725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.737 [2024-12-09 05:24:52.073898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.737 [2024-12-09 05:24:52.073909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.737 [2024-12-09 05:24:52.073918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.737 [2024-12-09 05:24:52.073926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.737 [2024-12-09 05:24:52.086169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.737 [2024-12-09 05:24:52.086599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.737 [2024-12-09 05:24:52.086619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.737 [2024-12-09 05:24:52.086629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.737 [2024-12-09 05:24:52.086801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.737 [2024-12-09 05:24:52.086973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.737 [2024-12-09 05:24:52.086984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.737 [2024-12-09 05:24:52.086993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.737 [2024-12-09 05:24:52.087001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.737 [2024-12-09 05:24:52.099123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.737 [2024-12-09 05:24:52.099558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.737 [2024-12-09 05:24:52.099577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.737 [2024-12-09 05:24:52.099587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.737 [2024-12-09 05:24:52.099759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.737 [2024-12-09 05:24:52.099931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.737 [2024-12-09 05:24:52.099943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.737 [2024-12-09 05:24:52.099952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.737 [2024-12-09 05:24:52.099960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.737 [2024-12-09 05:24:52.112091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.737 5043.00 IOPS, 19.70 MiB/s [2024-12-09T04:24:52.207Z] [2024-12-09 05:24:52.113762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.737 [2024-12-09 05:24:52.113781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.737 [2024-12-09 05:24:52.113791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.737 [2024-12-09 05:24:52.113963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.737 [2024-12-09 05:24:52.114139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.737 [2024-12-09 05:24:52.114149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.737 [2024-12-09 05:24:52.114158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.737 [2024-12-09 05:24:52.114167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.737 [2024-12-09 05:24:52.125140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.737 [2024-12-09 05:24:52.125555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.737 [2024-12-09 05:24:52.125574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.737 [2024-12-09 05:24:52.125585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.737 [2024-12-09 05:24:52.125757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.737 [2024-12-09 05:24:52.125929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.737 [2024-12-09 05:24:52.125941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.737 [2024-12-09 05:24:52.125950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.737 [2024-12-09 05:24:52.125958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.737 [2024-12-09 05:24:52.138194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.737 [2024-12-09 05:24:52.138520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.737 [2024-12-09 05:24:52.138539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.737 [2024-12-09 05:24:52.138549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.737 [2024-12-09 05:24:52.138721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.737 [2024-12-09 05:24:52.138893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.737 [2024-12-09 05:24:52.138904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.737 [2024-12-09 05:24:52.138913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.737 [2024-12-09 05:24:52.138921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.737 [2024-12-09 05:24:52.151182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.737 [2024-12-09 05:24:52.151613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.737 [2024-12-09 05:24:52.151632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.737 [2024-12-09 05:24:52.151642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.737 [2024-12-09 05:24:52.151813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.737 [2024-12-09 05:24:52.151984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.737 [2024-12-09 05:24:52.151995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.737 [2024-12-09 05:24:52.152008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.737 [2024-12-09 05:24:52.152017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.737 [2024-12-09 05:24:52.164084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.737 [2024-12-09 05:24:52.164434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.737 [2024-12-09 05:24:52.164453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.737 [2024-12-09 05:24:52.164463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.737 [2024-12-09 05:24:52.164635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.737 [2024-12-09 05:24:52.164807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.737 [2024-12-09 05:24:52.164819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.737 [2024-12-09 05:24:52.164828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.737 [2024-12-09 05:24:52.164837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.737 [2024-12-09 05:24:52.177090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.737 [2024-12-09 05:24:52.177503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.737 [2024-12-09 05:24:52.177522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.737 [2024-12-09 05:24:52.177532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.737 [2024-12-09 05:24:52.177704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.737 [2024-12-09 05:24:52.177876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.737 [2024-12-09 05:24:52.177888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.737 [2024-12-09 05:24:52.177897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.737 [2024-12-09 05:24:52.177905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.737 [2024-12-09 05:24:52.190145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.737 [2024-12-09 05:24:52.190472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.737 [2024-12-09 05:24:52.190491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.737 [2024-12-09 05:24:52.190502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.737 [2024-12-09 05:24:52.190674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.737 [2024-12-09 05:24:52.190845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.738 [2024-12-09 05:24:52.190857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.738 [2024-12-09 05:24:52.190866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.738 [2024-12-09 05:24:52.190874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.738 [2024-12-09 05:24:52.203141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.997 [2024-12-09 05:24:52.203506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-12-09 05:24:52.203526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.997 [2024-12-09 05:24:52.203536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.997 [2024-12-09 05:24:52.203708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.997 [2024-12-09 05:24:52.203880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.997 [2024-12-09 05:24:52.203891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.997 [2024-12-09 05:24:52.203901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.997 [2024-12-09 05:24:52.203910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.997 [2024-12-09 05:24:52.216052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.997 [2024-12-09 05:24:52.216412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-12-09 05:24:52.216432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.997 [2024-12-09 05:24:52.216442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.997 [2024-12-09 05:24:52.216613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.997 [2024-12-09 05:24:52.216785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.997 [2024-12-09 05:24:52.216797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.997 [2024-12-09 05:24:52.216806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.997 [2024-12-09 05:24:52.216815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.997 [2024-12-09 05:24:52.229038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.997 [2024-12-09 05:24:52.229449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-12-09 05:24:52.229468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.997 [2024-12-09 05:24:52.229478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.997 [2024-12-09 05:24:52.229651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.997 [2024-12-09 05:24:52.229823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.997 [2024-12-09 05:24:52.229834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.997 [2024-12-09 05:24:52.229843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.997 [2024-12-09 05:24:52.229852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.997 [2024-12-09 05:24:52.242074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.997 [2024-12-09 05:24:52.242426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-12-09 05:24:52.242445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.997 [2024-12-09 05:24:52.242458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.997 [2024-12-09 05:24:52.242630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.997 [2024-12-09 05:24:52.242802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.997 [2024-12-09 05:24:52.242814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.997 [2024-12-09 05:24:52.242823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.997 [2024-12-09 05:24:52.242831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.997 [2024-12-09 05:24:52.255082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.997 [2024-12-09 05:24:52.255506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-12-09 05:24:52.255525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.997 [2024-12-09 05:24:52.255535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.997 [2024-12-09 05:24:52.255706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.997 [2024-12-09 05:24:52.255877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.997 [2024-12-09 05:24:52.255889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.997 [2024-12-09 05:24:52.255897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.997 [2024-12-09 05:24:52.255905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.997 [2024-12-09 05:24:52.267982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.997 [2024-12-09 05:24:52.268403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-12-09 05:24:52.268422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.997 [2024-12-09 05:24:52.268432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.997 [2024-12-09 05:24:52.268604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.997 [2024-12-09 05:24:52.268777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.997 [2024-12-09 05:24:52.268789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.997 [2024-12-09 05:24:52.268798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.997 [2024-12-09 05:24:52.268807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.997 [2024-12-09 05:24:52.280902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:09.997 [2024-12-09 05:24:52.281356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-12-09 05:24:52.281376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:09.997 [2024-12-09 05:24:52.281386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:09.997 [2024-12-09 05:24:52.281558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:09.997 [2024-12-09 05:24:52.281734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:09.997 [2024-12-09 05:24:52.281746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:09.997 [2024-12-09 05:24:52.281755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:09.997 [2024-12-09 05:24:52.281763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:09.997 [2024-12-09 05:24:52.293842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.997 [2024-12-09 05:24:52.294267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.997 [2024-12-09 05:24:52.294287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.997 [2024-12-09 05:24:52.294297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.997 [2024-12-09 05:24:52.294468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.997 [2024-12-09 05:24:52.294639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.997 [2024-12-09 05:24:52.294651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.997 [2024-12-09 05:24:52.294660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.997 [2024-12-09 05:24:52.294668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.997 [2024-12-09 05:24:52.306772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.997 [2024-12-09 05:24:52.307202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.997 [2024-12-09 05:24:52.307225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.997 [2024-12-09 05:24:52.307235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.997 [2024-12-09 05:24:52.307414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.997 [2024-12-09 05:24:52.307587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.997 [2024-12-09 05:24:52.307598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.997 [2024-12-09 05:24:52.307607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.997 [2024-12-09 05:24:52.307616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.998 [2024-12-09 05:24:52.319717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.998 [2024-12-09 05:24:52.320143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.998 [2024-12-09 05:24:52.320162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.998 [2024-12-09 05:24:52.320173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.998 [2024-12-09 05:24:52.320350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.998 [2024-12-09 05:24:52.320522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.998 [2024-12-09 05:24:52.320534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.998 [2024-12-09 05:24:52.320543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.998 [2024-12-09 05:24:52.320558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.998 [2024-12-09 05:24:52.332665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.998 [2024-12-09 05:24:52.333096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.998 [2024-12-09 05:24:52.333115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.998 [2024-12-09 05:24:52.333125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.998 [2024-12-09 05:24:52.333301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.998 [2024-12-09 05:24:52.333474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.998 [2024-12-09 05:24:52.333486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.998 [2024-12-09 05:24:52.333495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.998 [2024-12-09 05:24:52.333504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.998 [2024-12-09 05:24:52.345610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.998 [2024-12-09 05:24:52.345966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.998 [2024-12-09 05:24:52.345986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.998 [2024-12-09 05:24:52.345996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.998 [2024-12-09 05:24:52.346169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.998 [2024-12-09 05:24:52.346347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.998 [2024-12-09 05:24:52.346359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.998 [2024-12-09 05:24:52.346367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.998 [2024-12-09 05:24:52.346376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.998 [2024-12-09 05:24:52.358598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.998 [2024-12-09 05:24:52.359001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.998 [2024-12-09 05:24:52.359020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.998 [2024-12-09 05:24:52.359030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.998 [2024-12-09 05:24:52.359202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.998 [2024-12-09 05:24:52.359378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.998 [2024-12-09 05:24:52.359390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.998 [2024-12-09 05:24:52.359399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.998 [2024-12-09 05:24:52.359407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.998 [2024-12-09 05:24:52.371520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.998 [2024-12-09 05:24:52.371946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.998 [2024-12-09 05:24:52.371965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.998 [2024-12-09 05:24:52.371975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.998 [2024-12-09 05:24:52.372146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.998 [2024-12-09 05:24:52.372323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.998 [2024-12-09 05:24:52.372335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.998 [2024-12-09 05:24:52.372343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.998 [2024-12-09 05:24:52.372352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.998 [2024-12-09 05:24:52.384457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.998 [2024-12-09 05:24:52.384883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.998 [2024-12-09 05:24:52.384902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.998 [2024-12-09 05:24:52.384912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.998 [2024-12-09 05:24:52.385084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.998 [2024-12-09 05:24:52.385262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.998 [2024-12-09 05:24:52.385274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.998 [2024-12-09 05:24:52.385282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.998 [2024-12-09 05:24:52.385291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.998 [2024-12-09 05:24:52.397369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.998 [2024-12-09 05:24:52.397797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.998 [2024-12-09 05:24:52.397816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.998 [2024-12-09 05:24:52.397826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.998 [2024-12-09 05:24:52.397998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.998 [2024-12-09 05:24:52.398170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.998 [2024-12-09 05:24:52.398182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.998 [2024-12-09 05:24:52.398190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.998 [2024-12-09 05:24:52.398199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.998 [2024-12-09 05:24:52.410292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.998 [2024-12-09 05:24:52.410612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.998 [2024-12-09 05:24:52.410630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.998 [2024-12-09 05:24:52.410643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.998 [2024-12-09 05:24:52.410815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.998 [2024-12-09 05:24:52.410988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.998 [2024-12-09 05:24:52.410999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.998 [2024-12-09 05:24:52.411009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.998 [2024-12-09 05:24:52.411018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.998 [2024-12-09 05:24:52.423292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.998 [2024-12-09 05:24:52.423726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.998 [2024-12-09 05:24:52.423745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.998 [2024-12-09 05:24:52.423755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.998 [2024-12-09 05:24:52.423927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.998 [2024-12-09 05:24:52.424099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.998 [2024-12-09 05:24:52.424109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.998 [2024-12-09 05:24:52.424118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.998 [2024-12-09 05:24:52.424127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.998 [2024-12-09 05:24:52.436205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.998 [2024-12-09 05:24:52.436626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.998 [2024-12-09 05:24:52.436644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.998 [2024-12-09 05:24:52.436654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.998 [2024-12-09 05:24:52.436826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.998 [2024-12-09 05:24:52.436998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.998 [2024-12-09 05:24:52.437009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.998 [2024-12-09 05:24:52.437018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.998 [2024-12-09 05:24:52.437026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.999 [2024-12-09 05:24:52.449114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.999 [2024-12-09 05:24:52.449541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.999 [2024-12-09 05:24:52.449560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.999 [2024-12-09 05:24:52.449570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.999 [2024-12-09 05:24:52.449741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.999 [2024-12-09 05:24:52.449917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.999 [2024-12-09 05:24:52.449928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.999 [2024-12-09 05:24:52.449937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.999 [2024-12-09 05:24:52.449945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:09.999 [2024-12-09 05:24:52.462025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:09.999 [2024-12-09 05:24:52.462458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.999 [2024-12-09 05:24:52.462478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:09.999 [2024-12-09 05:24:52.462487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:09.999 [2024-12-09 05:24:52.462658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:09.999 [2024-12-09 05:24:52.463047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:09.999 [2024-12-09 05:24:52.463059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:09.999 [2024-12-09 05:24:52.463069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:09.999 [2024-12-09 05:24:52.463077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.258 [2024-12-09 05:24:52.475009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.258 [2024-12-09 05:24:52.475445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.258 [2024-12-09 05:24:52.475464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:10.258 [2024-12-09 05:24:52.475474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:10.258 [2024-12-09 05:24:52.475646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:10.258 [2024-12-09 05:24:52.475818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.258 [2024-12-09 05:24:52.475828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.258 [2024-12-09 05:24:52.475837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.258 [2024-12-09 05:24:52.475846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.259 [2024-12-09 05:24:52.487946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.259 [2024-12-09 05:24:52.488377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.259 [2024-12-09 05:24:52.488396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:10.259 [2024-12-09 05:24:52.488406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:10.259 [2024-12-09 05:24:52.488578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:10.259 [2024-12-09 05:24:52.488750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.259 [2024-12-09 05:24:52.488761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.259 [2024-12-09 05:24:52.488770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.259 [2024-12-09 05:24:52.488781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.259 [2024-12-09 05:24:52.500853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.259 [2024-12-09 05:24:52.501282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.259 [2024-12-09 05:24:52.501301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:10.259 [2024-12-09 05:24:52.501311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:10.259 [2024-12-09 05:24:52.501482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:10.259 [2024-12-09 05:24:52.501654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.259 [2024-12-09 05:24:52.501664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.259 [2024-12-09 05:24:52.501673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.259 [2024-12-09 05:24:52.501681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.259 [2024-12-09 05:24:52.513770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.259 [2024-12-09 05:24:52.514199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.259 [2024-12-09 05:24:52.514222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:10.259 [2024-12-09 05:24:52.514232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:10.259 [2024-12-09 05:24:52.514402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:10.259 [2024-12-09 05:24:52.514575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.259 [2024-12-09 05:24:52.514586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.259 [2024-12-09 05:24:52.514595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.259 [2024-12-09 05:24:52.514603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.259 [2024-12-09 05:24:52.526713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.259 [2024-12-09 05:24:52.527121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.259 [2024-12-09 05:24:52.527142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:10.259 [2024-12-09 05:24:52.527152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:10.259 [2024-12-09 05:24:52.527330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:10.259 [2024-12-09 05:24:52.527504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.259 [2024-12-09 05:24:52.527516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.259 [2024-12-09 05:24:52.527525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.259 [2024-12-09 05:24:52.527534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.259 [2024-12-09 05:24:52.539636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.259 [2024-12-09 05:24:52.539981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.259 [2024-12-09 05:24:52.539998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:10.259 [2024-12-09 05:24:52.540008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:10.259 [2024-12-09 05:24:52.540180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:10.259 [2024-12-09 05:24:52.540358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.259 [2024-12-09 05:24:52.540370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.259 [2024-12-09 05:24:52.540379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.259 [2024-12-09 05:24:52.540388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.259 [2024-12-09 05:24:52.552627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.259 [2024-12-09 05:24:52.552970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.259 [2024-12-09 05:24:52.552989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:10.259 [2024-12-09 05:24:52.552998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:10.259 [2024-12-09 05:24:52.553170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:10.259 [2024-12-09 05:24:52.553349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.259 [2024-12-09 05:24:52.553360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.259 [2024-12-09 05:24:52.553369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.259 [2024-12-09 05:24:52.553378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.259 [2024-12-09 05:24:52.565620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.259 [2024-12-09 05:24:52.566053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.259 [2024-12-09 05:24:52.566071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:10.259 [2024-12-09 05:24:52.566081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:10.259 [2024-12-09 05:24:52.566259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:10.259 [2024-12-09 05:24:52.566431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.259 [2024-12-09 05:24:52.566442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.259 [2024-12-09 05:24:52.566451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.259 [2024-12-09 05:24:52.566459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.259 [2024-12-09 05:24:52.578563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.259 [2024-12-09 05:24:52.578972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.259 [2024-12-09 05:24:52.578991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:10.259 [2024-12-09 05:24:52.579003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:10.259 [2024-12-09 05:24:52.579175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:10.259 [2024-12-09 05:24:52.579354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.259 [2024-12-09 05:24:52.579365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.259 [2024-12-09 05:24:52.579374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.259 [2024-12-09 05:24:52.579382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.259 [2024-12-09 05:24:52.591490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:10.259 [2024-12-09 05:24:52.591922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.259 [2024-12-09 05:24:52.591940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420
00:30:10.259 [2024-12-09 05:24:52.591950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set
00:30:10.259 [2024-12-09 05:24:52.592121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor
00:30:10.259 [2024-12-09 05:24:52.592298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:10.259 [2024-12-09 05:24:52.592309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:10.259 [2024-12-09 05:24:52.592318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:10.259 [2024-12-09 05:24:52.592326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:10.259 [2024-12-09 05:24:52.604431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.259 [2024-12-09 05:24:52.604774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.259 [2024-12-09 05:24:52.604792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:10.259 [2024-12-09 05:24:52.604801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:10.259 [2024-12-09 05:24:52.604973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:10.260 [2024-12-09 05:24:52.605144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.260 [2024-12-09 05:24:52.605155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.260 [2024-12-09 05:24:52.605164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.260 [2024-12-09 05:24:52.605172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.260 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:10.260 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:10.260 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:10.260 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:10.260 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:10.260 [2024-12-09 05:24:52.617446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.260 [2024-12-09 05:24:52.617782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-12-09 05:24:52.617805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:10.260 [2024-12-09 05:24:52.617815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:10.260 [2024-12-09 05:24:52.617988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:10.260 [2024-12-09 05:24:52.618160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.260 [2024-12-09 05:24:52.618171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.260 [2024-12-09 05:24:52.618180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.260 [2024-12-09 05:24:52.618188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.260 [2024-12-09 05:24:52.630475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.260 [2024-12-09 05:24:52.630808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-12-09 05:24:52.630827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:10.260 [2024-12-09 05:24:52.630836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:10.260 [2024-12-09 05:24:52.631009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:10.260 [2024-12-09 05:24:52.631182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.260 [2024-12-09 05:24:52.631194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.260 [2024-12-09 05:24:52.631203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.260 [2024-12-09 05:24:52.631217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.260 [2024-12-09 05:24:52.643482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.260 [2024-12-09 05:24:52.643841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-12-09 05:24:52.643860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:10.260 [2024-12-09 05:24:52.643869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:10.260 [2024-12-09 05:24:52.644040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:10.260 [2024-12-09 05:24:52.644220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.260 [2024-12-09 05:24:52.644231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.260 [2024-12-09 05:24:52.644240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.260 [2024-12-09 05:24:52.644248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.260 [2024-12-09 05:24:52.656528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.260 [2024-12-09 05:24:52.656915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-12-09 05:24:52.656933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:10.260 [2024-12-09 05:24:52.656942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:10.260 [2024-12-09 05:24:52.657118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:10.260 [2024-12-09 05:24:52.657295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.260 [2024-12-09 05:24:52.657307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.260 [2024-12-09 05:24:52.657316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.260 [2024-12-09 05:24:52.657324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.260 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:10.260 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:10.260 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.260 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:10.260 [2024-12-09 05:24:52.666254] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:10.260 [2024-12-09 05:24:52.669427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.260 [2024-12-09 05:24:52.669794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-12-09 05:24:52.669812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:10.260 [2024-12-09 05:24:52.669822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:10.260 [2024-12-09 05:24:52.669993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:10.260 [2024-12-09 05:24:52.670165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.260 [2024-12-09 05:24:52.670176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.260 [2024-12-09 05:24:52.670185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.260 [2024-12-09 05:24:52.670193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.260 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.260 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:10.260 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.260 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:10.260 [2024-12-09 05:24:52.682446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.260 [2024-12-09 05:24:52.682842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-12-09 05:24:52.682861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:10.260 [2024-12-09 05:24:52.682871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:10.260 [2024-12-09 05:24:52.683043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:10.260 [2024-12-09 05:24:52.683222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.260 [2024-12-09 05:24:52.683233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.260 [2024-12-09 05:24:52.683242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.260 [2024-12-09 05:24:52.683250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.260 [2024-12-09 05:24:52.695495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.260 [2024-12-09 05:24:52.695901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-12-09 05:24:52.695919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:10.260 [2024-12-09 05:24:52.695929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:10.260 [2024-12-09 05:24:52.696101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:10.260 [2024-12-09 05:24:52.696279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.260 [2024-12-09 05:24:52.696290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.260 [2024-12-09 05:24:52.696299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.260 [2024-12-09 05:24:52.696308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.260 [2024-12-09 05:24:52.708569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.260 [2024-12-09 05:24:52.708996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.260 [2024-12-09 05:24:52.709015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:10.260 [2024-12-09 05:24:52.709025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:10.260 [2024-12-09 05:24:52.709197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:10.260 [2024-12-09 05:24:52.709375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.260 [2024-12-09 05:24:52.709387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.260 [2024-12-09 05:24:52.709396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.260 [2024-12-09 05:24:52.709404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.260 Malloc0 00:30:10.260 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.260 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:10.260 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.261 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:10.261 [2024-12-09 05:24:52.721516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.261 [2024-12-09 05:24:52.721940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.261 [2024-12-09 05:24:52.721958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17edad0 with addr=10.0.0.2, port=4420 00:30:10.261 [2024-12-09 05:24:52.721968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17edad0 is same with the state(6) to be set 00:30:10.261 [2024-12-09 05:24:52.722139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17edad0 (9): Bad file descriptor 00:30:10.261 [2024-12-09 05:24:52.722316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:10.261 [2024-12-09 05:24:52.722328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:10.261 [2024-12-09 05:24:52.722337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:10.261 [2024-12-09 05:24:52.722348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:10.261 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.261 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:10.261 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.261 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:10.521 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.521 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:10.521 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.521 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:10.521 [2024-12-09 05:24:52.734448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:10.521 [2024-12-09 05:24:52.734453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.521 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.521 05:24:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 655967 00:30:10.521 [2024-12-09 05:24:52.761270] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:30:11.720 4894.57 IOPS, 19.12 MiB/s [2024-12-09T04:24:55.128Z] 5727.62 IOPS, 22.37 MiB/s [2024-12-09T04:24:56.505Z] 6382.89 IOPS, 24.93 MiB/s [2024-12-09T04:24:57.443Z] 6894.50 IOPS, 26.93 MiB/s [2024-12-09T04:24:58.381Z] 7337.64 IOPS, 28.66 MiB/s [2024-12-09T04:24:59.318Z] 7695.08 IOPS, 30.06 MiB/s [2024-12-09T04:25:00.254Z] 8008.08 IOPS, 31.28 MiB/s [2024-12-09T04:25:01.191Z] 8257.29 IOPS, 32.26 MiB/s 00:30:18.721 Latency(us) 00:30:18.721 [2024-12-09T04:25:01.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.721 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:18.721 Verification LBA range: start 0x0 length 0x4000 00:30:18.721 Nvme1n1 : 15.00 8472.63 33.10 13274.92 0.00 5866.75 417.79 16043.21 00:30:18.721 [2024-12-09T04:25:01.191Z] =================================================================================================================== 00:30:18.721 [2024-12-09T04:25:01.191Z] Total : 8472.63 33.10 13274.92 0.00 5866.75 417.79 16043.21 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:30:18.980 05:25:01 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:18.980 rmmod nvme_tcp 00:30:18.980 rmmod nvme_fabrics 00:30:18.980 rmmod nvme_keyring 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 656930 ']' 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 656930 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 656930 ']' 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 656930 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:18.980 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 656930 00:30:19.239 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:19.239 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:19.239 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 656930' 00:30:19.239 killing process with pid 656930 00:30:19.239 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@973 -- # kill 656930 00:30:19.239 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 656930 00:30:19.239 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:19.239 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:19.239 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:19.239 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:30:19.497 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:30:19.497 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:30:19.497 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:19.497 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:19.497 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:19.497 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:19.497 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:19.497 05:25:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.406 05:25:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:21.406 00:30:21.406 real 0m28.369s 00:30:21.406 user 1m3.436s 00:30:21.406 sys 0m8.672s 00:30:21.406 05:25:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:21.406 05:25:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:21.406 ************************************ 00:30:21.406 END TEST nvmf_bdevperf 00:30:21.406 ************************************ 00:30:21.406 05:25:03 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:21.406 05:25:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:21.406 05:25:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:21.406 05:25:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.667 ************************************ 00:30:21.667 START TEST nvmf_target_disconnect 00:30:21.667 ************************************ 00:30:21.667 05:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:21.667 * Looking for test storage... 00:30:21.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:21.667 05:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:21.667 05:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:30:21.667 05:25:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:30:21.667 05:25:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:21.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.667 --rc genhtml_branch_coverage=1 00:30:21.667 --rc genhtml_function_coverage=1 00:30:21.667 --rc genhtml_legend=1 00:30:21.667 --rc geninfo_all_blocks=1 00:30:21.667 --rc geninfo_unexecuted_blocks=1 
00:30:21.667 00:30:21.667 ' 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:21.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.667 --rc genhtml_branch_coverage=1 00:30:21.667 --rc genhtml_function_coverage=1 00:30:21.667 --rc genhtml_legend=1 00:30:21.667 --rc geninfo_all_blocks=1 00:30:21.667 --rc geninfo_unexecuted_blocks=1 00:30:21.667 00:30:21.667 ' 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:21.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.667 --rc genhtml_branch_coverage=1 00:30:21.667 --rc genhtml_function_coverage=1 00:30:21.667 --rc genhtml_legend=1 00:30:21.667 --rc geninfo_all_blocks=1 00:30:21.667 --rc geninfo_unexecuted_blocks=1 00:30:21.667 00:30:21.667 ' 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:21.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.667 --rc genhtml_branch_coverage=1 00:30:21.667 --rc genhtml_function_coverage=1 00:30:21.667 --rc genhtml_legend=1 00:30:21.667 --rc geninfo_all_blocks=1 00:30:21.667 --rc geninfo_unexecuted_blocks=1 00:30:21.667 00:30:21.667 ' 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:21.667 05:25:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:21.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:30:21.667 05:25:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:30:29.793 
05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:29.793 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:29.793 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:29.793 Found net devices under 0000:af:00.0: cvl_0_0 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:29.793 Found net devices under 0000:af:00.1: cvl_0_1 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:29.793 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:29.794 05:25:11 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:29.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:29.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:30:29.794 00:30:29.794 --- 10.0.0.2 ping statistics --- 00:30:29.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.794 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:29.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:29.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:30:29.794 00:30:29.794 --- 10.0.0.1 ping statistics --- 00:30:29.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.794 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:29.794 05:25:11 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:29.794 ************************************ 00:30:29.794 START TEST nvmf_target_disconnect_tc1 00:30:29.794 ************************************ 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:29.794 [2024-12-09 05:25:11.533295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.794 [2024-12-09 05:25:11.533422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd86ee0 with 
addr=10.0.0.2, port=4420 00:30:29.794 [2024-12-09 05:25:11.533498] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:29.794 [2024-12-09 05:25:11.533534] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:29.794 [2024-12-09 05:25:11.533562] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:29.794 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:29.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:29.794 Initializing NVMe Controllers 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:29.794 00:30:29.794 real 0m0.179s 00:30:29.794 user 0m0.093s 00:30:29.794 sys 0m0.087s 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:29.794 ************************************ 00:30:29.794 END TEST nvmf_target_disconnect_tc1 00:30:29.794 ************************************ 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:29.794 05:25:11 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:29.794 ************************************ 00:30:29.794 START TEST nvmf_target_disconnect_tc2 00:30:29.794 ************************************ 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=662380 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 662380 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 662380 ']' 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:29.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:29.794 05:25:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:29.794 [2024-12-09 05:25:11.726023] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:30:29.794 [2024-12-09 05:25:11.726071] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:29.794 [2024-12-09 05:25:11.825255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:29.794 [2024-12-09 05:25:11.866595] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:29.794 [2024-12-09 05:25:11.866633] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:29.795 [2024-12-09 05:25:11.866643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:29.795 [2024-12-09 05:25:11.866651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:29.795 [2024-12-09 05:25:11.866658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:29.795 [2024-12-09 05:25:11.868395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:29.795 [2024-12-09 05:25:11.868505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:29.795 [2024-12-09 05:25:11.868614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:29.795 [2024-12-09 05:25:11.868615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:30.362 Malloc0 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.362 05:25:12 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:30.362 [2024-12-09 05:25:12.654478] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.362 05:25:12 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:30.362 [2024-12-09 05:25:12.686763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=662481 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:30.362 05:25:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:32.269 05:25:14 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 662380 00:30:32.269 05:25:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 
Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 [2024-12-09 05:25:14.716845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 
00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 
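The long runs of `Read`/`Write completed with error (sct=0, sc=8)` above come from the reconnect example's in-flight I/O failing after the target is killed. When triaging such a run it can help to tally the failed completions per direction from a saved console log; a minimal sketch (the heredoc sample stands in for a captured log file, which is an assumption — in practice you would point `grep` at the real log):

```shell
# Tally failed Read vs Write completions from a captured excerpt.
# The heredoc below is placeholder sample data mirroring the lines above;
# substitute the path of the saved console log when using this for real.
log=$(mktemp)
cat > "$log" <<'EOF'
Read completed with error (sct=0, sc=8)
starting I/O failed
Write completed with error (sct=0, sc=8)
starting I/O failed
Read completed with error (sct=0, sc=8)
starting I/O failed
EOF
reads=$(grep -c '^Read completed with error' "$log")
writes=$(grep -c '^Write completed with error' "$log")
echo "reads=$reads writes=$writes"
rm -f "$log"
```

The per-direction split is useful because a skew toward one direction can point at which queue pairs were mid-transfer when the target went away.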
00:30:32.269 [2024-12-09 05:25:14.717071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting 
I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Write completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.269 starting I/O failed 00:30:32.269 [2024-12-09 05:25:14.717302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.269 Read completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Read completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Read completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Read completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Write completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Read completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Read completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Write completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Read completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Write completed with error (sct=0, sc=8) 
00:30:32.270 starting I/O failed 00:30:32.270 Write completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Write completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Write completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Read completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Write completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Read completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Write completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Read completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Write completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Read completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Read completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Write completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Read completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Write completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Write completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Write completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Read completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Write completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Read completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Write completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Read completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 Read completed with error (sct=0, sc=8) 00:30:32.270 starting I/O failed 00:30:32.270 [2024-12-09 05:25:14.717533] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.270 [2024-12-09 05:25:14.717754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.717778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.270 [2024-12-09 05:25:14.718048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.718060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.270 [2024-12-09 05:25:14.718265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.718288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.270 [2024-12-09 05:25:14.718411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.718424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.270 [2024-12-09 05:25:14.718622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.718635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 
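Each `posix_sock_create: connect() failed, errno = 111` entry above is ECONNREFUSED: the killed target no longer has anything listening at 10.0.0.2:4420, so every reconnect attempt is refused until the listener returns. A quick shell-side way to check whether a listener is back is bash's `/dev/tcp` pseudo-path; a sketch (the loopback address and port below are placeholders chosen to be safely unused, not the test's 10.0.0.2:4420):

```shell
# Probe a TCP endpoint: opening /dev/tcp/<addr>/<port> makes bash attempt
# a connect(); with no listener present the attempt fails with
# ECONNREFUSED (errno 111), the same error the reconnect example logs.
probe() {
    local addr=$1 port=$2
    if (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; then
        echo up      # connect() succeeded; the fd closes with the subshell
    else
        echo down    # connect() was refused (or otherwise failed)
    fi
}
probe 127.0.0.1 59999   # placeholder: an almost certainly unused port
```

Note that `/dev/tcp` is a bash feature, not a real device node, so the probe must run under bash rather than a POSIX `sh`.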
00:30:32.270 [2024-12-09 05:25:14.718895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.718907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.270 [2024-12-09 05:25:14.719165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.719178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.270 [2024-12-09 05:25:14.719272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.719285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.270 [2024-12-09 05:25:14.719422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.719434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.270 [2024-12-09 05:25:14.719590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.719603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 
00:30:32.270 [2024-12-09 05:25:14.719757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.719769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.270 [2024-12-09 05:25:14.719955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.719967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.270 [2024-12-09 05:25:14.720102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.720114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.270 [2024-12-09 05:25:14.720267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.720280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.270 [2024-12-09 05:25:14.720444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.720456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 
00:30:32.270 [2024-12-09 05:25:14.720555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.720600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.270 [2024-12-09 05:25:14.720816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.720857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.270 [2024-12-09 05:25:14.721012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.721053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.270 [2024-12-09 05:25:14.721291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.721303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.270 [2024-12-09 05:25:14.721482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.721523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 
00:30:32.270 [2024-12-09 05:25:14.721845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.721886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.270 [2024-12-09 05:25:14.722096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.722136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.270 [2024-12-09 05:25:14.722366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.722408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.270 [2024-12-09 05:25:14.722582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.722624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.270 [2024-12-09 05:25:14.722783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.722823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 
00:30:32.270 [2024-12-09 05:25:14.723021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.723062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.270 [2024-12-09 05:25:14.723229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.270 [2024-12-09 05:25:14.723271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.270 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.723421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.723462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.723628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.723669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.723934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.723975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 
00:30:32.271 [2024-12-09 05:25:14.724263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.724304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.724502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.724514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.724616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.724631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.724902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.724916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.725115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.725128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 
00:30:32.271 [2024-12-09 05:25:14.725216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.725230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.725399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.725412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.725547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.725560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.725658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.725671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.725780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.725793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 
00:30:32.271 [2024-12-09 05:25:14.726043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.726056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.726285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.726299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.726384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.726397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.726484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.726497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.726594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.726607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 
00:30:32.271 [2024-12-09 05:25:14.726692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.726705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.726777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.726790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.726953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.726994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.727161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.727202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.727476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.727517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 
00:30:32.271 [2024-12-09 05:25:14.727649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.727690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.727991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.728032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.728336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.728379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.728551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.728592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.728753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.728793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 
00:30:32.271 [2024-12-09 05:25:14.729061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.729101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.729316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.729384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.729638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.729718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.730056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.730125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.730334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.730348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 
00:30:32.271 [2024-12-09 05:25:14.730480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.730493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.730634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.730647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.730745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.730758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.731015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.271 [2024-12-09 05:25:14.731028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.271 qpair failed and we were unable to recover it. 00:30:32.271 [2024-12-09 05:25:14.731232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.731245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 
00:30:32.272 [2024-12-09 05:25:14.731399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.731412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.272 [2024-12-09 05:25:14.731517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.731529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.272 [2024-12-09 05:25:14.731634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.731646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.272 [2024-12-09 05:25:14.731879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.731892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.272 [2024-12-09 05:25:14.732118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.732158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 
00:30:32.272 [2024-12-09 05:25:14.732391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.732433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.272 [2024-12-09 05:25:14.732591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.732631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.272 [2024-12-09 05:25:14.732792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.732840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.272 [2024-12-09 05:25:14.733120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.733161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.272 [2024-12-09 05:25:14.733292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.733305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 
00:30:32.272 [2024-12-09 05:25:14.733403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.733416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.272 [2024-12-09 05:25:14.733521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.733534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.272 [2024-12-09 05:25:14.733684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.733696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.272 [2024-12-09 05:25:14.733953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.733966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.272 [2024-12-09 05:25:14.734127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.734139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 
00:30:32.272 [2024-12-09 05:25:14.734301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.734314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.272 [2024-12-09 05:25:14.734467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.734479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.272 [2024-12-09 05:25:14.734564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.734577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.272 [2024-12-09 05:25:14.734748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.734761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.272 [2024-12-09 05:25:14.734995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.735008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 
00:30:32.272 [2024-12-09 05:25:14.735143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.735156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.272 [2024-12-09 05:25:14.735350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.735363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.272 [2024-12-09 05:25:14.735541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.735558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.272 [2024-12-09 05:25:14.735743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.735760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.272 [2024-12-09 05:25:14.735918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.735935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 
00:30:32.272 [2024-12-09 05:25:14.736182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.736199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.272 [2024-12-09 05:25:14.736319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.272 [2024-12-09 05:25:14.736337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.272 qpair failed and we were unable to recover it. 00:30:32.547 [2024-12-09 05:25:14.736450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.547 [2024-12-09 05:25:14.736468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.547 qpair failed and we were unable to recover it. 00:30:32.547 [2024-12-09 05:25:14.736596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.547 [2024-12-09 05:25:14.736613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.547 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.736727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.736745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 
00:30:32.548 [2024-12-09 05:25:14.736972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.736989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.737168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.737185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.737284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.737301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.737407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.737424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.737616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.737641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 
00:30:32.548 [2024-12-09 05:25:14.737910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.737931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.738021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.738038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.738320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.738338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.738514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.738554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.738694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.738735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 
00:30:32.548 [2024-12-09 05:25:14.739006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.739050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.739229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.739247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.739359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.739406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.739560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.739601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.739752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.739793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 
00:30:32.548 [2024-12-09 05:25:14.740001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.740041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.740216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.740259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.740455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.740472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.740582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.740620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.740862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.740903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 
00:30:32.548 [2024-12-09 05:25:14.741179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.741260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.741472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.741513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.741720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.741759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.741987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.742026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.742226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.742268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 
00:30:32.548 [2024-12-09 05:25:14.742427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.742445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.742551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.742568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.742730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.742747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.742904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.742921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.743160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.743177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 
00:30:32.548 [2024-12-09 05:25:14.743351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.743369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.743567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.743608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.743771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.743811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.744032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.744072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 00:30:32.548 [2024-12-09 05:25:14.744292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.548 [2024-12-09 05:25:14.744310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.548 qpair failed and we were unable to recover it. 
00:30:32.548 [2024-12-09 05:25:14.744501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.549 [2024-12-09 05:25:14.744518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.549 qpair failed and we were unable to recover it. 00:30:32.549 [2024-12-09 05:25:14.744674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.549 [2024-12-09 05:25:14.744691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.549 qpair failed and we were unable to recover it. 00:30:32.549 [2024-12-09 05:25:14.744909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.549 [2024-12-09 05:25:14.744932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.549 qpair failed and we were unable to recover it. 00:30:32.549 [2024-12-09 05:25:14.745052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.549 [2024-12-09 05:25:14.745074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.549 qpair failed and we were unable to recover it. 00:30:32.549 [2024-12-09 05:25:14.745305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.549 [2024-12-09 05:25:14.745328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:32.549 qpair failed and we were unable to recover it. 
00:30:32.549 [2024-12-09 05:25:14.745528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.549 [2024-12-09 05:25:14.745549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.549 qpair failed and we were unable to recover it. 00:30:32.549 [2024-12-09 05:25:14.745695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.549 [2024-12-09 05:25:14.745712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.549 qpair failed and we were unable to recover it. 00:30:32.549 [2024-12-09 05:25:14.745928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.549 [2024-12-09 05:25:14.745968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.549 qpair failed and we were unable to recover it. 00:30:32.549 [2024-12-09 05:25:14.746125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.549 [2024-12-09 05:25:14.746165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.549 qpair failed and we were unable to recover it. 00:30:32.549 [2024-12-09 05:25:14.746430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.549 [2024-12-09 05:25:14.746518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.549 qpair failed and we were unable to recover it. 
00:30:32.549 [2024-12-09 05:25:14.746810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.549 [2024-12-09 05:25:14.746854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.549 qpair failed and we were unable to recover it.
00:30:32.549 [2024-12-09 05:25:14.747389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.549 [2024-12-09 05:25:14.747408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.549 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111; sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats continuously over timestamps 05:25:14.747 through 05:25:14.772 ...]
00:30:32.552 [2024-12-09 05:25:14.772700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.552 [2024-12-09 05:25:14.772717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.552 qpair failed and we were unable to recover it. 00:30:32.552 [2024-12-09 05:25:14.772978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.552 [2024-12-09 05:25:14.773019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.552 qpair failed and we were unable to recover it. 00:30:32.552 [2024-12-09 05:25:14.773329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.552 [2024-12-09 05:25:14.773371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.552 qpair failed and we were unable to recover it. 00:30:32.552 [2024-12-09 05:25:14.773651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.552 [2024-12-09 05:25:14.773668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.552 qpair failed and we were unable to recover it. 00:30:32.552 [2024-12-09 05:25:14.773947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.552 [2024-12-09 05:25:14.773964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.552 qpair failed and we were unable to recover it. 
00:30:32.552 [2024-12-09 05:25:14.774121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.552 [2024-12-09 05:25:14.774138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.552 qpair failed and we were unable to recover it. 00:30:32.552 [2024-12-09 05:25:14.774380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.552 [2024-12-09 05:25:14.774401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.552 qpair failed and we were unable to recover it. 00:30:32.552 [2024-12-09 05:25:14.774652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.552 [2024-12-09 05:25:14.774669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.552 qpair failed and we were unable to recover it. 00:30:32.552 [2024-12-09 05:25:14.774902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.552 [2024-12-09 05:25:14.774919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.552 qpair failed and we were unable to recover it. 00:30:32.552 [2024-12-09 05:25:14.775168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.552 [2024-12-09 05:25:14.775185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.552 qpair failed and we were unable to recover it. 
00:30:32.552 [2024-12-09 05:25:14.775366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.552 [2024-12-09 05:25:14.775383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.552 qpair failed and we were unable to recover it. 00:30:32.552 [2024-12-09 05:25:14.775661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.552 [2024-12-09 05:25:14.775701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.552 qpair failed and we were unable to recover it. 00:30:32.552 [2024-12-09 05:25:14.775918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.552 [2024-12-09 05:25:14.775958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.552 qpair failed and we were unable to recover it. 00:30:32.552 [2024-12-09 05:25:14.776279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.552 [2024-12-09 05:25:14.776321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.552 qpair failed and we were unable to recover it. 00:30:32.552 [2024-12-09 05:25:14.776609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.552 [2024-12-09 05:25:14.776649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.552 qpair failed and we were unable to recover it. 
00:30:32.552 [2024-12-09 05:25:14.776929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.552 [2024-12-09 05:25:14.776969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.552 qpair failed and we were unable to recover it. 00:30:32.552 [2024-12-09 05:25:14.777254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.552 [2024-12-09 05:25:14.777295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.552 qpair failed and we were unable to recover it. 00:30:32.552 [2024-12-09 05:25:14.777559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.552 [2024-12-09 05:25:14.777600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.552 qpair failed and we were unable to recover it. 00:30:32.552 [2024-12-09 05:25:14.777808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.552 [2024-12-09 05:25:14.777848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.552 qpair failed and we were unable to recover it. 00:30:32.552 [2024-12-09 05:25:14.778125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.552 [2024-12-09 05:25:14.778165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.552 qpair failed and we were unable to recover it. 
00:30:32.552 [2024-12-09 05:25:14.778447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.552 [2024-12-09 05:25:14.778465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.552 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.778671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.778688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.778923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.778940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.779202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.779236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.779385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.779402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 
00:30:32.553 [2024-12-09 05:25:14.779654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.779695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.779988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.780028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.780293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.780335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.780610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.780650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.780920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.780961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 
00:30:32.553 [2024-12-09 05:25:14.781231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.781272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.781566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.781606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.781761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.781801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.782016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.782062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.782352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.782370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 
00:30:32.553 [2024-12-09 05:25:14.782523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.782540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.782762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.782802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.783088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.783129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.783451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.783493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.783707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.783747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 
00:30:32.553 [2024-12-09 05:25:14.784023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.784064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.784346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.784363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.784620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.784637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.784887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.784904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.785132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.785149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 
00:30:32.553 [2024-12-09 05:25:14.785293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.785311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.785512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.785529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.785775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.785792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.786013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.786029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.786188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.786205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 
00:30:32.553 [2024-12-09 05:25:14.786352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.786370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.786581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.786598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.786702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.786718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.786880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.786898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.787040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.787057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 
00:30:32.553 [2024-12-09 05:25:14.787291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.787308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.787471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.787488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.787710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.787750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.553 [2024-12-09 05:25:14.788010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.553 [2024-12-09 05:25:14.788051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.553 qpair failed and we were unable to recover it. 00:30:32.554 [2024-12-09 05:25:14.788332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.554 [2024-12-09 05:25:14.788375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.554 qpair failed and we were unable to recover it. 
00:30:32.554 [2024-12-09 05:25:14.788529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.554 [2024-12-09 05:25:14.788568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.554 qpair failed and we were unable to recover it. 00:30:32.554 [2024-12-09 05:25:14.788794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.554 [2024-12-09 05:25:14.788835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.554 qpair failed and we were unable to recover it. 00:30:32.554 [2024-12-09 05:25:14.789144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.554 [2024-12-09 05:25:14.789185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.554 qpair failed and we were unable to recover it. 00:30:32.554 [2024-12-09 05:25:14.789465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.554 [2024-12-09 05:25:14.789505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.554 qpair failed and we were unable to recover it. 00:30:32.554 [2024-12-09 05:25:14.789780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.554 [2024-12-09 05:25:14.789820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.554 qpair failed and we were unable to recover it. 
00:30:32.554 [2024-12-09 05:25:14.790093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.554 [2024-12-09 05:25:14.790134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.554 qpair failed and we were unable to recover it. 00:30:32.554 [2024-12-09 05:25:14.790319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.554 [2024-12-09 05:25:14.790337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.554 qpair failed and we were unable to recover it. 00:30:32.554 [2024-12-09 05:25:14.790490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.554 [2024-12-09 05:25:14.790507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.554 qpair failed and we were unable to recover it. 00:30:32.554 [2024-12-09 05:25:14.790741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.554 [2024-12-09 05:25:14.790758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.554 qpair failed and we were unable to recover it. 00:30:32.554 [2024-12-09 05:25:14.790990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.554 [2024-12-09 05:25:14.791006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.554 qpair failed and we were unable to recover it. 
00:30:32.554 [2024-12-09 05:25:14.791189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.554 [2024-12-09 05:25:14.791206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.554 qpair failed and we were unable to recover it. 00:30:32.554 [2024-12-09 05:25:14.791420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.554 [2024-12-09 05:25:14.791437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.554 qpair failed and we were unable to recover it. 00:30:32.554 [2024-12-09 05:25:14.791684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.554 [2024-12-09 05:25:14.791724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.554 qpair failed and we were unable to recover it. 00:30:32.554 [2024-12-09 05:25:14.792033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.554 [2024-12-09 05:25:14.792073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.554 qpair failed and we were unable to recover it. 00:30:32.554 [2024-12-09 05:25:14.792304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.554 [2024-12-09 05:25:14.792346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.554 qpair failed and we were unable to recover it. 
00:30:32.554 [2024-12-09 05:25:14.792659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.792699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.554 [2024-12-09 05:25:14.792960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.793000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.554 [2024-12-09 05:25:14.793241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.793282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.554 [2024-12-09 05:25:14.793572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.793589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.554 [2024-12-09 05:25:14.793688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.793704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.554 [2024-12-09 05:25:14.793917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.793934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.554 [2024-12-09 05:25:14.794198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.794219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.554 [2024-12-09 05:25:14.794330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.794350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.554 [2024-12-09 05:25:14.794589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.794629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.554 [2024-12-09 05:25:14.794946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.794986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.554 [2024-12-09 05:25:14.795287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.795329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.554 [2024-12-09 05:25:14.795553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.795593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.554 [2024-12-09 05:25:14.795877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.795917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.554 [2024-12-09 05:25:14.796246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.796290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.554 [2024-12-09 05:25:14.796510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.796550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.554 [2024-12-09 05:25:14.796840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.796880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.554 [2024-12-09 05:25:14.797122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.797162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.554 [2024-12-09 05:25:14.797397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.797438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.554 [2024-12-09 05:25:14.797747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.797787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.554 [2024-12-09 05:25:14.798069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.798109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.554 [2024-12-09 05:25:14.798389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.798406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.554 [2024-12-09 05:25:14.798618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.554 [2024-12-09 05:25:14.798635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.554 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.798791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.798808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.798990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.799031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.799177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.799226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.799444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.799485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.799706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.799726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.799870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.799887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.800123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.800140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.800390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.800408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.800643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.800660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.800839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.800856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.801091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.801108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.801252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.801269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.801519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.801536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.801774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.801791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.801971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.801988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.802204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.802257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.802462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.802503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.802789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.802829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.803104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.803145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.803359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.803377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.803533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.803550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.803628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.803644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.803822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.803839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.803987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.804004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.804100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.804117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.804266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.804284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.804521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.804538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.804725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.804742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.804996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.805013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.805246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.805263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.805492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.805509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.805765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.805785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.805941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.805958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.806227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.806270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.806569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.806609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.806821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.806838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.807068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.807086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.807319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.555 [2024-12-09 05:25:14.807337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.555 qpair failed and we were unable to recover it.
00:30:32.555 [2024-12-09 05:25:14.807574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.807592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.807773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.807790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.808011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.808052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.808295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.808337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.808630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.808670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.808975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.809016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.809240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.809282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.809595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.809636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.809943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.809983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.810259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.810311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.810551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.810568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.810726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.810744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.810925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.810977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.811221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.811262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.811489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.811529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.811785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.811825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.812039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.812079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.812390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.812432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.812698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.812715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.812878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.812895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.813051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.813068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.813319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.813337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.813631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.813649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.813746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.813762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.813919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.813936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.814046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.814062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.814295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.814313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.814589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.814608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.814818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.814835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.815084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.815101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.815311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.815329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.815489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.815506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.815758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.815804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.816143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.816190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.816371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.816389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.816555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.816573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.816798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.816815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.816994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.817011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.556 qpair failed and we were unable to recover it.
00:30:32.556 [2024-12-09 05:25:14.817245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.556 [2024-12-09 05:25:14.817264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.557 qpair failed and we were unable to recover it.
00:30:32.557 [2024-12-09 05:25:14.817438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.557 [2024-12-09 05:25:14.817455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.557 qpair failed and we were unable to recover it.
00:30:32.557 [2024-12-09 05:25:14.817696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.557 [2024-12-09 05:25:14.817737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.557 qpair failed and we were unable to recover it.
00:30:32.557 [2024-12-09 05:25:14.818021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.557 [2024-12-09 05:25:14.818062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.557 qpair failed and we were unable to recover it.
00:30:32.557 [2024-12-09 05:25:14.818310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.557 [2024-12-09 05:25:14.818351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.557 qpair failed and we were unable to recover it.
00:30:32.557 [2024-12-09 05:25:14.818624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.557 [2024-12-09 05:25:14.818641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.557 qpair failed and we were unable to recover it.
00:30:32.557 [2024-12-09 05:25:14.818751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.557 [2024-12-09 05:25:14.818767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.557 qpair failed and we were unable to recover it.
00:30:32.557 [2024-12-09 05:25:14.818931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.557 [2024-12-09 05:25:14.818984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.557 qpair failed and we were unable to recover it.
00:30:32.557 [2024-12-09 05:25:14.819255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.557 [2024-12-09 05:25:14.819297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.557 qpair failed and we were unable to recover it.
00:30:32.557 [2024-12-09 05:25:14.819538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.557 [2024-12-09 05:25:14.819587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.557 qpair failed and we were unable to recover it.
00:30:32.557 [2024-12-09 05:25:14.819764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.557 [2024-12-09 05:25:14.819782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.557 qpair failed and we were unable to recover it.
00:30:32.557 [2024-12-09 05:25:14.819995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.557 [2024-12-09 05:25:14.820012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.557 qpair failed and we were unable to recover it.
00:30:32.557 [2024-12-09 05:25:14.820097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.557 [2024-12-09 05:25:14.820113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.557 qpair failed and we were unable to recover it.
00:30:32.557 [2024-12-09 05:25:14.820352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.557 [2024-12-09 05:25:14.820370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.557 qpair failed and we were unable to recover it.
00:30:32.557 [2024-12-09 05:25:14.820582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.557 [2024-12-09 05:25:14.820599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.557 qpair failed and we were unable to recover it.
00:30:32.557 [2024-12-09 05:25:14.820757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.557 [2024-12-09 05:25:14.820774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.557 qpair failed and we were unable to recover it.
00:30:32.557 [2024-12-09 05:25:14.820956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.557 [2024-12-09 05:25:14.820973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.557 qpair failed and we were unable to recover it.
00:30:32.557 [2024-12-09 05:25:14.821122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.557 [2024-12-09 05:25:14.821140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.557 qpair failed and we were unable to recover it. 00:30:32.557 [2024-12-09 05:25:14.821373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.557 [2024-12-09 05:25:14.821391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.557 qpair failed and we were unable to recover it. 00:30:32.557 [2024-12-09 05:25:14.821617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.557 [2024-12-09 05:25:14.821634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.557 qpair failed and we were unable to recover it. 00:30:32.557 [2024-12-09 05:25:14.821825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.557 [2024-12-09 05:25:14.821843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.557 qpair failed and we were unable to recover it. 00:30:32.557 [2024-12-09 05:25:14.822005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.557 [2024-12-09 05:25:14.822023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.557 qpair failed and we were unable to recover it. 
00:30:32.557 [2024-12-09 05:25:14.822270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.557 [2024-12-09 05:25:14.822310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.557 qpair failed and we were unable to recover it. 00:30:32.557 [2024-12-09 05:25:14.822600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.557 [2024-12-09 05:25:14.822646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.557 qpair failed and we were unable to recover it. 00:30:32.557 [2024-12-09 05:25:14.822890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.557 [2024-12-09 05:25:14.822931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.557 qpair failed and we were unable to recover it. 00:30:32.557 [2024-12-09 05:25:14.823228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.557 [2024-12-09 05:25:14.823269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.557 qpair failed and we were unable to recover it. 00:30:32.557 [2024-12-09 05:25:14.823543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.557 [2024-12-09 05:25:14.823560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.557 qpair failed and we were unable to recover it. 
00:30:32.557 [2024-12-09 05:25:14.823789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.557 [2024-12-09 05:25:14.823806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.557 qpair failed and we were unable to recover it. 00:30:32.557 [2024-12-09 05:25:14.823900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.557 [2024-12-09 05:25:14.823916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.557 qpair failed and we were unable to recover it. 00:30:32.557 [2024-12-09 05:25:14.824181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.557 [2024-12-09 05:25:14.824230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.557 qpair failed and we were unable to recover it. 00:30:32.557 [2024-12-09 05:25:14.824511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.557 [2024-12-09 05:25:14.824552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.557 qpair failed and we were unable to recover it. 00:30:32.557 [2024-12-09 05:25:14.824837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.557 [2024-12-09 05:25:14.824877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.557 qpair failed and we were unable to recover it. 
00:30:32.557 [2024-12-09 05:25:14.825142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.557 [2024-12-09 05:25:14.825183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.825462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.825503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.825805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.825845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.826110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.826151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.826361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.826378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 
00:30:32.558 [2024-12-09 05:25:14.826628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.826646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.826791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.826808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.827013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.827031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.827224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.827242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.827496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.827514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 
00:30:32.558 [2024-12-09 05:25:14.827663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.827681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.827847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.827864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.828145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.828186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.828504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.828546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.828856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.828896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 
00:30:32.558 [2024-12-09 05:25:14.829131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.829172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.829474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.829492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.829748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.829765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.829997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.830017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.830273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.830291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 
00:30:32.558 [2024-12-09 05:25:14.830471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.830489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.830713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.830731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.830973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.830990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.831147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.831165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.831408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.831452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 
00:30:32.558 [2024-12-09 05:25:14.831773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.831828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.832069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.832110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.832392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.832410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.832558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.832575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.832735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.832752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 
00:30:32.558 [2024-12-09 05:25:14.832965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.832983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.833220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.833237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.833409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.833427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.833669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.833709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.833973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.834014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 
00:30:32.558 [2024-12-09 05:25:14.834297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.834338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.834587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.834628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.834915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.834932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.558 [2024-12-09 05:25:14.835094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.558 [2024-12-09 05:25:14.835112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.558 qpair failed and we were unable to recover it. 00:30:32.559 [2024-12-09 05:25:14.835279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.835297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 
00:30:32.559 [2024-12-09 05:25:14.835532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.835549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 00:30:32.559 [2024-12-09 05:25:14.835825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.835843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 00:30:32.559 [2024-12-09 05:25:14.836084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.836101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 00:30:32.559 [2024-12-09 05:25:14.836269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.836287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 00:30:32.559 [2024-12-09 05:25:14.836476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.836531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 
00:30:32.559 [2024-12-09 05:25:14.836847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.836894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 00:30:32.559 [2024-12-09 05:25:14.837112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.837152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 00:30:32.559 [2024-12-09 05:25:14.837402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.837462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 00:30:32.559 [2024-12-09 05:25:14.837745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.837786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 00:30:32.559 [2024-12-09 05:25:14.838120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.838161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 
00:30:32.559 [2024-12-09 05:25:14.838479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.838497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 00:30:32.559 [2024-12-09 05:25:14.838671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.838689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 00:30:32.559 [2024-12-09 05:25:14.838853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.838907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 00:30:32.559 [2024-12-09 05:25:14.839231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.839273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 00:30:32.559 [2024-12-09 05:25:14.839487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.839528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 
00:30:32.559 [2024-12-09 05:25:14.839749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.839790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 00:30:32.559 [2024-12-09 05:25:14.840006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.840046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 00:30:32.559 [2024-12-09 05:25:14.840253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.840297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 00:30:32.559 [2024-12-09 05:25:14.840512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.840553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 00:30:32.559 [2024-12-09 05:25:14.840775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.840793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 
00:30:32.559 [2024-12-09 05:25:14.841011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.559 [2024-12-09 05:25:14.841028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.559 qpair failed and we were unable to recover it. 
[... same connect() failed (errno = 111) / sock connection error / qpair failed triplet repeated for tqpair=0x1c91000, addr=10.0.0.2, port=4420, from 2024-12-09 05:25:14.841247 through 05:25:14.870920 ...]
00:30:32.562 [2024-12-09 05:25:14.871107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.562 [2024-12-09 05:25:14.871124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.562 qpair failed and we were unable to recover it. 
00:30:32.562 [2024-12-09 05:25:14.871318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.562 [2024-12-09 05:25:14.871337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.562 qpair failed and we were unable to recover it. 00:30:32.562 [2024-12-09 05:25:14.871580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.562 [2024-12-09 05:25:14.871621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.562 qpair failed and we were unable to recover it. 00:30:32.562 [2024-12-09 05:25:14.871906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.562 [2024-12-09 05:25:14.871947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.562 qpair failed and we were unable to recover it. 00:30:32.562 [2024-12-09 05:25:14.872167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.562 [2024-12-09 05:25:14.872221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.562 qpair failed and we were unable to recover it. 00:30:32.562 [2024-12-09 05:25:14.872522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.562 [2024-12-09 05:25:14.872563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.562 qpair failed and we were unable to recover it. 
00:30:32.562 [2024-12-09 05:25:14.872804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.562 [2024-12-09 05:25:14.872821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.562 qpair failed and we were unable to recover it. 00:30:32.562 [2024-12-09 05:25:14.873089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.562 [2024-12-09 05:25:14.873106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.562 qpair failed and we were unable to recover it. 00:30:32.562 [2024-12-09 05:25:14.873297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.562 [2024-12-09 05:25:14.873316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.562 qpair failed and we were unable to recover it. 00:30:32.562 [2024-12-09 05:25:14.873560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.562 [2024-12-09 05:25:14.873578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.562 qpair failed and we were unable to recover it. 00:30:32.562 [2024-12-09 05:25:14.873739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.562 [2024-12-09 05:25:14.873757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.562 qpair failed and we were unable to recover it. 
00:30:32.562 [2024-12-09 05:25:14.874000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.562 [2024-12-09 05:25:14.874018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.562 qpair failed and we were unable to recover it. 00:30:32.562 [2024-12-09 05:25:14.874238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.562 [2024-12-09 05:25:14.874257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.562 qpair failed and we were unable to recover it. 00:30:32.562 [2024-12-09 05:25:14.874499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.874516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.874783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.874801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.875041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.875060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 
00:30:32.563 [2024-12-09 05:25:14.875325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.875343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.875585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.875603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.875871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.875889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.876110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.876128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.876300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.876318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 
00:30:32.563 [2024-12-09 05:25:14.876546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.876564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.876682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.876700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.876949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.876967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.877169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.877187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.877357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.877375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 
00:30:32.563 [2024-12-09 05:25:14.877635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.877675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.877946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.877987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.878268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.878311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.878576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.878593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.878762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.878780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 
00:30:32.563 [2024-12-09 05:25:14.878965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.879006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.879239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.879282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.879576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.879620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.879886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.879907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.880152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.880170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 
00:30:32.563 [2024-12-09 05:25:14.880413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.880432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.880672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.880689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.880934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.880952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.881193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.881222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.881440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.881457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 
00:30:32.563 [2024-12-09 05:25:14.881634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.881652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.881865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.881906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.882199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.882253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.882521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.882562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.882837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.882879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 
00:30:32.563 [2024-12-09 05:25:14.883133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.883173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.883487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.883528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.883826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.883844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.884083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.884100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.563 qpair failed and we were unable to recover it. 00:30:32.563 [2024-12-09 05:25:14.884219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.563 [2024-12-09 05:25:14.884236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 
00:30:32.564 [2024-12-09 05:25:14.884481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.884499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.884718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.884765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.885066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.885107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.885400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.885443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.885726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.885766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 
00:30:32.564 [2024-12-09 05:25:14.886063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.886103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.886407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.886449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.886666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.886707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.886983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.887023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.887329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.887371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 
00:30:32.564 [2024-12-09 05:25:14.887666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.887713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.888018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.888059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.888266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.888308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.888593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.888633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.888897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.888915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 
00:30:32.564 [2024-12-09 05:25:14.889179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.889197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.889447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.889465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.889729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.889747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.889975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.889993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.890230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.890248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 
00:30:32.564 [2024-12-09 05:25:14.890410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.890428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.890672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.890690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.890841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.890858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.890953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.890969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.891139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.891157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 
00:30:32.564 [2024-12-09 05:25:14.891349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.891367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.891562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.891602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.891892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.891933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.892185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.892237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.892541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.892581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 
00:30:32.564 [2024-12-09 05:25:14.892873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.892891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.893117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.893134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.893383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.893400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.893551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.893569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.893742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.893780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 
00:30:32.564 [2024-12-09 05:25:14.894074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.894115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.894428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.894471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.564 [2024-12-09 05:25:14.894704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.564 [2024-12-09 05:25:14.894751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.564 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.895055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.895095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.895251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.895293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 
00:30:32.565 [2024-12-09 05:25:14.895580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.895621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.895934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.895952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.896212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.896231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.896400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.896418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.896523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.896539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 
00:30:32.565 [2024-12-09 05:25:14.896700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.896718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.896961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.896980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.897160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.897177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.897444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.897462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.897693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.897711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 
00:30:32.565 [2024-12-09 05:25:14.897893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.897911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.898171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.898225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.898519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.898560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.898840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.898858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.899036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.899054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 
00:30:32.565 [2024-12-09 05:25:14.899302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.899321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.899493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.899511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.899697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.899715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.899873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.899890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.900135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.900153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 
00:30:32.565 [2024-12-09 05:25:14.900397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.900416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.900654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.900671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.900839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.900856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.901080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.901122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.901442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.901483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 
00:30:32.565 [2024-12-09 05:25:14.901716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.901734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.901951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.901969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.902218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.902236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.902467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.902485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.902727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.902745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 
00:30:32.565 [2024-12-09 05:25:14.902963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.902981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.903129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.903147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.903299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.903317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.903500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.903518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 00:30:32.565 [2024-12-09 05:25:14.903738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.565 [2024-12-09 05:25:14.903756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.565 qpair failed and we were unable to recover it. 
00:30:32.565 [2024-12-09 05:25:14.904019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.904037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.904284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.904302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.904542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.904561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.904752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.904770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.904961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.904979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 
00:30:32.566 [2024-12-09 05:25:14.905195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.905218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.905400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.905418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.905636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.905654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.905889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.905929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.906217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.906260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 
00:30:32.566 [2024-12-09 05:25:14.906553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.906593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.906814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.906855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.907077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.907118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.907398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.907441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.907662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.907680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 
00:30:32.566 [2024-12-09 05:25:14.907897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.907914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.908181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.908230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.908556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.908598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.908896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.908936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.909233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.909274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 
00:30:32.566 [2024-12-09 05:25:14.909435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.909477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.909704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.909745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.910036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.910053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.910219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.910237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.910424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.910465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 
00:30:32.566 [2024-12-09 05:25:14.910696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.910736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.911026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.911067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.911289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.911331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.911569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.911609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.911877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.911895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 
00:30:32.566 [2024-12-09 05:25:14.912163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.912184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.912292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.912309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.912460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.912478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.566 [2024-12-09 05:25:14.912717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.566 [2024-12-09 05:25:14.912734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.566 qpair failed and we were unable to recover it. 00:30:32.567 [2024-12-09 05:25:14.912905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.567 [2024-12-09 05:25:14.912922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.567 qpair failed and we were unable to recover it. 
00:30:32.567 [2024-12-09 05:25:14.913044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.567 [2024-12-09 05:25:14.913062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.567 qpair failed and we were unable to recover it. 00:30:32.567 [2024-12-09 05:25:14.913304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.567 [2024-12-09 05:25:14.913322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.567 qpair failed and we were unable to recover it. 00:30:32.567 [2024-12-09 05:25:14.913565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.567 [2024-12-09 05:25:14.913583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.567 qpair failed and we were unable to recover it. 00:30:32.567 [2024-12-09 05:25:14.913822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.567 [2024-12-09 05:25:14.913840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.567 qpair failed and we were unable to recover it. 00:30:32.567 [2024-12-09 05:25:14.914004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.567 [2024-12-09 05:25:14.914022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.567 qpair failed and we were unable to recover it. 
00:30:32.567 [2024-12-09 05:25:14.914265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.567 [2024-12-09 05:25:14.914283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.567 qpair failed and we were unable to recover it. 00:30:32.567 [2024-12-09 05:25:14.914548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.567 [2024-12-09 05:25:14.914589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.567 qpair failed and we were unable to recover it. 00:30:32.567 [2024-12-09 05:25:14.914844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.567 [2024-12-09 05:25:14.914885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.567 qpair failed and we were unable to recover it. 00:30:32.567 [2024-12-09 05:25:14.915181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.567 [2024-12-09 05:25:14.915244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.567 qpair failed and we were unable to recover it. 00:30:32.567 [2024-12-09 05:25:14.915473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.567 [2024-12-09 05:25:14.915514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.567 qpair failed and we were unable to recover it. 
00:30:32.567 [2024-12-09 05:25:14.915681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.567 [2024-12-09 05:25:14.915722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.567 qpair failed and we were unable to recover it. 00:30:32.567 [2024-12-09 05:25:14.915933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.567 [2024-12-09 05:25:14.915956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.567 qpair failed and we were unable to recover it. 00:30:32.567 [2024-12-09 05:25:14.916192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.567 [2024-12-09 05:25:14.916216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.567 qpair failed and we were unable to recover it. 00:30:32.567 [2024-12-09 05:25:14.916438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.567 [2024-12-09 05:25:14.916456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.567 qpair failed and we were unable to recover it. 00:30:32.567 [2024-12-09 05:25:14.916677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.567 [2024-12-09 05:25:14.916695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.567 qpair failed and we were unable to recover it. 
00:30:32.567 [2024-12-09 05:25:14.916917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.567 [2024-12-09 05:25:14.916934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.567 qpair failed and we were unable to recover it.
(the three-line error sequence above repeats verbatim, with only the timestamps advancing, from [2024-12-09 05:25:14.916917] through [2024-12-09 05:25:14.947460]: every reconnect attempt for tqpair=0x1c91000 to addr=10.0.0.2, port=4420 fails with errno = 111 and the qpair cannot be recovered)
00:30:32.570 [2024-12-09 05:25:14.947772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.570 [2024-12-09 05:25:14.947813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.570 qpair failed and we were unable to recover it. 00:30:32.570 [2024-12-09 05:25:14.948113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.570 [2024-12-09 05:25:14.948154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.570 qpair failed and we were unable to recover it. 00:30:32.570 [2024-12-09 05:25:14.948464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.570 [2024-12-09 05:25:14.948506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.570 qpair failed and we were unable to recover it. 00:30:32.570 [2024-12-09 05:25:14.948808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.570 [2024-12-09 05:25:14.948848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.570 qpair failed and we were unable to recover it. 00:30:32.570 [2024-12-09 05:25:14.949153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.570 [2024-12-09 05:25:14.949195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.570 qpair failed and we were unable to recover it. 
00:30:32.570 [2024-12-09 05:25:14.949482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.570 [2024-12-09 05:25:14.949523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.570 qpair failed and we were unable to recover it. 00:30:32.570 [2024-12-09 05:25:14.949751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.570 [2024-12-09 05:25:14.949791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.570 qpair failed and we were unable to recover it. 00:30:32.570 [2024-12-09 05:25:14.949983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.570 [2024-12-09 05:25:14.950000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.570 qpair failed and we were unable to recover it. 00:30:32.571 [2024-12-09 05:25:14.950245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.950288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 00:30:32.571 [2024-12-09 05:25:14.950505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.950545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 
00:30:32.571 [2024-12-09 05:25:14.950862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.950903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 00:30:32.571 [2024-12-09 05:25:14.951179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.951248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 00:30:32.571 [2024-12-09 05:25:14.951523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.951564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 00:30:32.571 [2024-12-09 05:25:14.951861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.951901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 00:30:32.571 [2024-12-09 05:25:14.952196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.952247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 
00:30:32.571 [2024-12-09 05:25:14.952553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.952594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 00:30:32.571 [2024-12-09 05:25:14.952891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.952932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 00:30:32.571 [2024-12-09 05:25:14.953239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.953282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 00:30:32.571 [2024-12-09 05:25:14.953527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.953568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 00:30:32.571 [2024-12-09 05:25:14.953835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.953855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 
00:30:32.571 [2024-12-09 05:25:14.954076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.954094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 00:30:32.571 [2024-12-09 05:25:14.954287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.954305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 00:30:32.571 [2024-12-09 05:25:14.954553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.954570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 00:30:32.571 [2024-12-09 05:25:14.954740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.954757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 00:30:32.571 [2024-12-09 05:25:14.954929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.954947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 
00:30:32.571 [2024-12-09 05:25:14.955152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.955193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 00:30:32.571 [2024-12-09 05:25:14.955509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.955551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 00:30:32.571 [2024-12-09 05:25:14.955816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.955833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 00:30:32.571 [2024-12-09 05:25:14.956083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.956101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 00:30:32.571 [2024-12-09 05:25:14.956348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.571 [2024-12-09 05:25:14.956366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.571 qpair failed and we were unable to recover it. 
00:30:32.571 [2024-12-09 05:25:14.956535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.956553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 00:30:32.572 [2024-12-09 05:25:14.956763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.956804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 00:30:32.572 [2024-12-09 05:25:14.956937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.956978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 00:30:32.572 [2024-12-09 05:25:14.957274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.957317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 00:30:32.572 [2024-12-09 05:25:14.957608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.957648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 
00:30:32.572 [2024-12-09 05:25:14.957943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.957984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 00:30:32.572 [2024-12-09 05:25:14.958276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.958318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 00:30:32.572 [2024-12-09 05:25:14.958566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.958607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 00:30:32.572 [2024-12-09 05:25:14.958832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.958873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 00:30:32.572 [2024-12-09 05:25:14.959163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.959204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 
00:30:32.572 [2024-12-09 05:25:14.959452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.959493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 00:30:32.572 [2024-12-09 05:25:14.959715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.959756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 00:30:32.572 [2024-12-09 05:25:14.960013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.960031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 00:30:32.572 [2024-12-09 05:25:14.960300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.960318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 00:30:32.572 [2024-12-09 05:25:14.960554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.960571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 
00:30:32.572 [2024-12-09 05:25:14.960799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.960817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 00:30:32.572 [2024-12-09 05:25:14.961062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.961080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 00:30:32.572 [2024-12-09 05:25:14.961329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.961348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 00:30:32.572 [2024-12-09 05:25:14.961464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.961481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 00:30:32.572 [2024-12-09 05:25:14.961744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.961783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 
00:30:32.572 [2024-12-09 05:25:14.962093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.962134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 00:30:32.572 [2024-12-09 05:25:14.962441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.962484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 00:30:32.572 [2024-12-09 05:25:14.962697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.962714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 00:30:32.572 [2024-12-09 05:25:14.962911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.962929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 00:30:32.572 [2024-12-09 05:25:14.963101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.963118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 
00:30:32.572 [2024-12-09 05:25:14.963283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.572 [2024-12-09 05:25:14.963336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.572 qpair failed and we were unable to recover it. 00:30:32.572 [2024-12-09 05:25:14.963555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.963596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 00:30:32.573 [2024-12-09 05:25:14.963918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.963959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 00:30:32.573 [2024-12-09 05:25:14.964285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.964327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 00:30:32.573 [2024-12-09 05:25:14.964623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.964664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 
00:30:32.573 [2024-12-09 05:25:14.964976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.965020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 00:30:32.573 [2024-12-09 05:25:14.965261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.965303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 00:30:32.573 [2024-12-09 05:25:14.965606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.965648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 00:30:32.573 [2024-12-09 05:25:14.965920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.965958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 00:30:32.573 [2024-12-09 05:25:14.966152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.966170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 
00:30:32.573 [2024-12-09 05:25:14.966398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.966417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 00:30:32.573 [2024-12-09 05:25:14.966575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.966593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 00:30:32.573 [2024-12-09 05:25:14.966836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.966854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 00:30:32.573 [2024-12-09 05:25:14.967104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.967121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 00:30:32.573 [2024-12-09 05:25:14.967352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.967371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 
00:30:32.573 [2024-12-09 05:25:14.967540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.967558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 00:30:32.573 [2024-12-09 05:25:14.967739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.967757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 00:30:32.573 [2024-12-09 05:25:14.967998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.968016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 00:30:32.573 [2024-12-09 05:25:14.968256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.968275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 00:30:32.573 [2024-12-09 05:25:14.968527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.968546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 
00:30:32.573 [2024-12-09 05:25:14.968658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.968676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 00:30:32.573 [2024-12-09 05:25:14.968849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.968867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 00:30:32.573 [2024-12-09 05:25:14.969018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.969036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 00:30:32.573 [2024-12-09 05:25:14.969281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.969300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 00:30:32.573 [2024-12-09 05:25:14.969493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.573 [2024-12-09 05:25:14.969511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.573 qpair failed and we were unable to recover it. 
00:30:32.577 [2024-12-09 05:25:14.994940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.577 [2024-12-09 05:25:14.994958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.577 qpair failed and we were unable to recover it. 00:30:32.577 [2024-12-09 05:25:14.995157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.577 [2024-12-09 05:25:14.995175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.577 qpair failed and we were unable to recover it. 00:30:32.577 [2024-12-09 05:25:14.995430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.577 [2024-12-09 05:25:14.995449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.577 qpair failed and we were unable to recover it. 00:30:32.577 [2024-12-09 05:25:14.995644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.577 [2024-12-09 05:25:14.995662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.577 qpair failed and we were unable to recover it. 00:30:32.577 [2024-12-09 05:25:14.995828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.577 [2024-12-09 05:25:14.995845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.577 qpair failed and we were unable to recover it. 
00:30:32.577 [2024-12-09 05:25:14.996081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.577 [2024-12-09 05:25:14.996099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.577 qpair failed and we were unable to recover it. 00:30:32.577 [2024-12-09 05:25:14.996295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.577 [2024-12-09 05:25:14.996314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.577 qpair failed and we were unable to recover it. 00:30:32.577 [2024-12-09 05:25:14.996550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.577 [2024-12-09 05:25:14.996567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.577 qpair failed and we were unable to recover it. 00:30:32.577 [2024-12-09 05:25:14.996817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.577 [2024-12-09 05:25:14.996835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.577 qpair failed and we were unable to recover it. 00:30:32.577 [2024-12-09 05:25:14.997021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.577 [2024-12-09 05:25:14.997039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.577 qpair failed and we were unable to recover it. 
00:30:32.577 [2024-12-09 05:25:14.997227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.577 [2024-12-09 05:25:14.997245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.577 qpair failed and we were unable to recover it. 00:30:32.577 [2024-12-09 05:25:14.997466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.577 [2024-12-09 05:25:14.997483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.577 qpair failed and we were unable to recover it. 00:30:32.577 [2024-12-09 05:25:14.997671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.577 [2024-12-09 05:25:14.997688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.577 qpair failed and we were unable to recover it. 00:30:32.577 [2024-12-09 05:25:14.997883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.577 [2024-12-09 05:25:14.997902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.577 qpair failed and we were unable to recover it. 00:30:32.577 [2024-12-09 05:25:14.998051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.577 [2024-12-09 05:25:14.998069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.577 qpair failed and we were unable to recover it. 
00:30:32.577 [2024-12-09 05:25:14.998233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.577 [2024-12-09 05:25:14.998251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.577 qpair failed and we were unable to recover it. 00:30:32.577 [2024-12-09 05:25:14.998474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.577 [2024-12-09 05:25:14.998492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.577 qpair failed and we were unable to recover it. 00:30:32.577 [2024-12-09 05:25:14.998683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.577 [2024-12-09 05:25:14.998700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.577 qpair failed and we were unable to recover it. 00:30:32.577 [2024-12-09 05:25:14.998892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.577 [2024-12-09 05:25:14.998914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.577 qpair failed and we were unable to recover it. 00:30:32.577 [2024-12-09 05:25:14.999087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.577 [2024-12-09 05:25:14.999108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.577 qpair failed and we were unable to recover it. 
00:30:32.577 [2024-12-09 05:25:14.999269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.577 [2024-12-09 05:25:14.999288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.577 qpair failed and we were unable to recover it. 00:30:32.854 [2024-12-09 05:25:14.999522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.854 [2024-12-09 05:25:14.999542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.854 qpair failed and we were unable to recover it. 00:30:32.854 [2024-12-09 05:25:14.999721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.854 [2024-12-09 05:25:14.999740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.854 qpair failed and we were unable to recover it. 00:30:32.854 [2024-12-09 05:25:14.999993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.854 [2024-12-09 05:25:15.000014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.854 qpair failed and we were unable to recover it. 00:30:32.854 [2024-12-09 05:25:15.000242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.854 [2024-12-09 05:25:15.000263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.854 qpair failed and we were unable to recover it. 
00:30:32.854 [2024-12-09 05:25:15.000501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.854 [2024-12-09 05:25:15.000519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.854 qpair failed and we were unable to recover it. 00:30:32.854 [2024-12-09 05:25:15.000705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.000723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.000961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.000979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.001176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.001193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.001378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.001396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 
00:30:32.855 [2024-12-09 05:25:15.001563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.001581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.001689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.001706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.001949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.001968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.002144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.002162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.002462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.002480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 
00:30:32.855 [2024-12-09 05:25:15.002562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.002578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.002766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.002782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.002942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.002960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.003191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.003216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.003379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.003397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 
00:30:32.855 [2024-12-09 05:25:15.003485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.003501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.003674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.003693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.003921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.003939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.004115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.004133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.004342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.004362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 
00:30:32.855 [2024-12-09 05:25:15.004606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.004624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.004795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.004821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.005109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.005127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.005259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.005278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.005535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.005554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 
00:30:32.855 [2024-12-09 05:25:15.005728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.005745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.006005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.006023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.006301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.006320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.006499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.006516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 00:30:32.855 [2024-12-09 05:25:15.006758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.855 [2024-12-09 05:25:15.006775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.855 qpair failed and we were unable to recover it. 
00:30:32.855 [2024-12-09 05:25:15.006949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.856 [2024-12-09 05:25:15.006967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.856 qpair failed and we were unable to recover it. 00:30:32.856 [2024-12-09 05:25:15.007191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.856 [2024-12-09 05:25:15.007216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.856 qpair failed and we were unable to recover it. 00:30:32.856 [2024-12-09 05:25:15.007401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.856 [2024-12-09 05:25:15.007418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.856 qpair failed and we were unable to recover it. 00:30:32.856 [2024-12-09 05:25:15.007593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.856 [2024-12-09 05:25:15.007611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.856 qpair failed and we were unable to recover it. 00:30:32.856 [2024-12-09 05:25:15.007776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.856 [2024-12-09 05:25:15.007793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.856 qpair failed and we were unable to recover it. 
00:30:32.856 [2024-12-09 05:25:15.008033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.856 [2024-12-09 05:25:15.008051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.856 qpair failed and we were unable to recover it. 00:30:32.856 [2024-12-09 05:25:15.008223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.856 [2024-12-09 05:25:15.008240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.856 qpair failed and we were unable to recover it. 00:30:32.856 [2024-12-09 05:25:15.008344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.856 [2024-12-09 05:25:15.008361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.856 qpair failed and we were unable to recover it. 00:30:32.856 [2024-12-09 05:25:15.008648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.856 [2024-12-09 05:25:15.008666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.856 qpair failed and we were unable to recover it. 00:30:32.856 [2024-12-09 05:25:15.008839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.856 [2024-12-09 05:25:15.008857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.856 qpair failed and we were unable to recover it. 
00:30:32.856 [2024-12-09 05:25:15.009078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.856 [2024-12-09 05:25:15.009096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.856 qpair failed and we were unable to recover it. 00:30:32.856 [2024-12-09 05:25:15.009347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.856 [2024-12-09 05:25:15.009366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.856 qpair failed and we were unable to recover it. 00:30:32.856 [2024-12-09 05:25:15.009554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.856 [2024-12-09 05:25:15.009572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.856 qpair failed and we were unable to recover it. 00:30:32.856 [2024-12-09 05:25:15.009801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.856 [2024-12-09 05:25:15.009819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.856 qpair failed and we were unable to recover it. 00:30:32.856 [2024-12-09 05:25:15.009989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.856 [2024-12-09 05:25:15.010008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.856 qpair failed and we were unable to recover it. 
00:30:32.856 [2024-12-09 05:25:15.010201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.856 [2024-12-09 05:25:15.010225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.856 qpair failed and we were unable to recover it. 00:30:32.856 [2024-12-09 05:25:15.010330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.856 [2024-12-09 05:25:15.010345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.856 qpair failed and we were unable to recover it. 00:30:32.856 [2024-12-09 05:25:15.010460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.856 [2024-12-09 05:25:15.010478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.856 qpair failed and we were unable to recover it. 00:30:32.856 [2024-12-09 05:25:15.010606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.856 [2024-12-09 05:25:15.010624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.856 qpair failed and we were unable to recover it. 00:30:32.856 [2024-12-09 05:25:15.010781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.856 [2024-12-09 05:25:15.010798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.856 qpair failed and we were unable to recover it. 
00:30:32.856 [2024-12-09 05:25:15.011047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.856 [2024-12-09 05:25:15.011065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.856 qpair failed and we were unable to recover it.
[... the same triple — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats ~114 more times, timestamps 05:25:15.011312 through 05:25:15.037763, wall-clock 00:30:32.856–00:30:32.862 ...]
00:30:32.862 [2024-12-09 05:25:15.038010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.862 [2024-12-09 05:25:15.038027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.862 qpair failed and we were unable to recover it. 00:30:32.862 [2024-12-09 05:25:15.038272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.862 [2024-12-09 05:25:15.038290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.862 qpair failed and we were unable to recover it. 00:30:32.862 [2024-12-09 05:25:15.038524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.038542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.038704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.038721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.038985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.039003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 
00:30:32.863 [2024-12-09 05:25:15.039190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.039225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.039381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.039399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.039575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.039592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.039755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.039773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.039963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.039982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 
00:30:32.863 [2024-12-09 05:25:15.040139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.040156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.040326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.040346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.040565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.040583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.040834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.040853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.040973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.040991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 
00:30:32.863 [2024-12-09 05:25:15.041185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.041205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.041389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.041409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.041583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.041601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.041766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.041785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.042019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.042041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 
00:30:32.863 [2024-12-09 05:25:15.042276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.042296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.042469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.042488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.042733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.042752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.043013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.043032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.043201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.043227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 
00:30:32.863 [2024-12-09 05:25:15.043383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.043400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.043564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.043582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.043829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.043847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.044068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.044086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.044295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.044314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 
00:30:32.863 [2024-12-09 05:25:15.044570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.044590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.044762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.044780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.044971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.044991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.045234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.045253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.045494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.045512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 
00:30:32.863 [2024-12-09 05:25:15.045674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.045691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.045903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.045921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.046142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.046161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.046433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.046452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.046621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.046640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 
00:30:32.863 [2024-12-09 05:25:15.046838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.046861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.047026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.047043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.047259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.047278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.047496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.047513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.047793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.047811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 
00:30:32.863 [2024-12-09 05:25:15.047959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.047978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.048186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.048225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.048400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.048417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.048593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.048610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.048792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.048810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 
00:30:32.863 [2024-12-09 05:25:15.049033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.049051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.049292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.049312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.049562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.049581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.049803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.049821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.049997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.050015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 
00:30:32.863 [2024-12-09 05:25:15.050258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.050277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.050540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.050558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.050729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.050746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.050908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.050925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.051165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.051183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 
00:30:32.863 [2024-12-09 05:25:15.051392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.051410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.051571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.863 [2024-12-09 05:25:15.051589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.863 qpair failed and we were unable to recover it. 00:30:32.863 [2024-12-09 05:25:15.051836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.864 [2024-12-09 05:25:15.051854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.864 qpair failed and we were unable to recover it. 00:30:32.864 [2024-12-09 05:25:15.052025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.864 [2024-12-09 05:25:15.052042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.864 qpair failed and we were unable to recover it. 00:30:32.864 [2024-12-09 05:25:15.052218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.864 [2024-12-09 05:25:15.052236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.864 qpair failed and we were unable to recover it. 
00:30:32.864 [2024-12-09 05:25:15.052469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.864 [2024-12-09 05:25:15.052487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.864 qpair failed and we were unable to recover it. 00:30:32.864 [2024-12-09 05:25:15.052725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.864 [2024-12-09 05:25:15.052743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.864 qpair failed and we were unable to recover it. 00:30:32.864 [2024-12-09 05:25:15.053004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.864 [2024-12-09 05:25:15.053021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.864 qpair failed and we were unable to recover it. 00:30:32.864 [2024-12-09 05:25:15.053201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.864 [2024-12-09 05:25:15.053227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.864 qpair failed and we were unable to recover it. 00:30:32.864 [2024-12-09 05:25:15.053470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.864 [2024-12-09 05:25:15.053488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.864 qpair failed and we were unable to recover it. 
00:30:32.864 [2024-12-09 05:25:15.053749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.864 [2024-12-09 05:25:15.053767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.864 qpair failed and we were unable to recover it. 00:30:32.864 [2024-12-09 05:25:15.053995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.864 [2024-12-09 05:25:15.054013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.864 qpair failed and we were unable to recover it. 00:30:32.864 [2024-12-09 05:25:15.054090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.864 [2024-12-09 05:25:15.054106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.864 qpair failed and we were unable to recover it. 00:30:32.864 [2024-12-09 05:25:15.054349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.864 [2024-12-09 05:25:15.054370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.864 qpair failed and we were unable to recover it. 00:30:32.864 [2024-12-09 05:25:15.054564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.864 [2024-12-09 05:25:15.054581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.864 qpair failed and we were unable to recover it. 
00:30:32.864 [2024-12-09 05:25:15.054744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.864 [2024-12-09 05:25:15.054762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.864 qpair failed and we were unable to recover it.
[... the three-line error block above repeats 115 times in this excerpt (timestamps 05:25:15.054744 through 05:25:15.079827), every occurrence identical apart from the timestamp: connect() failed with errno = 111 for tqpair=0x1c91000, addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." ...]
00:30:32.868 [2024-12-09 05:25:15.080052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.080070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 00:30:32.868 [2024-12-09 05:25:15.080341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.080359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 00:30:32.868 [2024-12-09 05:25:15.080522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.080540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 00:30:32.868 [2024-12-09 05:25:15.080718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.080735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 00:30:32.868 [2024-12-09 05:25:15.080903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.080920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 
00:30:32.868 [2024-12-09 05:25:15.081142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.081159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 00:30:32.868 [2024-12-09 05:25:15.081407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.081425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 00:30:32.868 [2024-12-09 05:25:15.081648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.081665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 00:30:32.868 [2024-12-09 05:25:15.081911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.081928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 00:30:32.868 [2024-12-09 05:25:15.082174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.082192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 
00:30:32.868 [2024-12-09 05:25:15.082375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.082393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 00:30:32.868 [2024-12-09 05:25:15.082648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.082666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 00:30:32.868 [2024-12-09 05:25:15.082906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.082924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 00:30:32.868 [2024-12-09 05:25:15.083147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.083165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 00:30:32.868 [2024-12-09 05:25:15.083332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.083350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 
00:30:32.868 [2024-12-09 05:25:15.083575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.083593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 00:30:32.868 [2024-12-09 05:25:15.083705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.083721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 00:30:32.868 [2024-12-09 05:25:15.083962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.083979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 00:30:32.868 [2024-12-09 05:25:15.084227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.084245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 00:30:32.868 [2024-12-09 05:25:15.084409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.084426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 
00:30:32.868 [2024-12-09 05:25:15.084673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.084691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 00:30:32.868 [2024-12-09 05:25:15.084934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.084952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 00:30:32.868 [2024-12-09 05:25:15.085194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.085219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 00:30:32.868 [2024-12-09 05:25:15.085457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.868 [2024-12-09 05:25:15.085474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.868 qpair failed and we were unable to recover it. 00:30:32.868 [2024-12-09 05:25:15.085718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.085737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 
00:30:32.869 [2024-12-09 05:25:15.085958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.085975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.086137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.086155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.086310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.086328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.086483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.086501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.086761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.086779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 
00:30:32.869 [2024-12-09 05:25:15.086999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.087018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.087182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.087199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.087395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.087413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.087661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.087679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.087856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.087874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 
00:30:32.869 [2024-12-09 05:25:15.088069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.088088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.088260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.088279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.088522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.088540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.088761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.088780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.089019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.089037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 
00:30:32.869 [2024-12-09 05:25:15.089149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.089166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.089408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.089427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.089704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.089721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.089888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.089905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.090129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.090146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 
00:30:32.869 [2024-12-09 05:25:15.090378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.090396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.090489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.090509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.090753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.090772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.090936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.090954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.091148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.091166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 
00:30:32.869 [2024-12-09 05:25:15.091435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.091453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.091723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.869 [2024-12-09 05:25:15.091741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.869 qpair failed and we were unable to recover it. 00:30:32.869 [2024-12-09 05:25:15.091852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.091869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 00:30:32.870 [2024-12-09 05:25:15.092066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.092083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 00:30:32.870 [2024-12-09 05:25:15.092259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.092277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 
00:30:32.870 [2024-12-09 05:25:15.092442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.092460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 00:30:32.870 [2024-12-09 05:25:15.092682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.092700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 00:30:32.870 [2024-12-09 05:25:15.092797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.092813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 00:30:32.870 [2024-12-09 05:25:15.093076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.093095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 00:30:32.870 [2024-12-09 05:25:15.093364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.093382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 
00:30:32.870 [2024-12-09 05:25:15.093538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.093556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 00:30:32.870 [2024-12-09 05:25:15.093736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.093754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 00:30:32.870 [2024-12-09 05:25:15.093991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.094009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 00:30:32.870 [2024-12-09 05:25:15.094168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.094186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 00:30:32.870 [2024-12-09 05:25:15.094296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.094313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 
00:30:32.870 [2024-12-09 05:25:15.094533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.094551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 00:30:32.870 [2024-12-09 05:25:15.094788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.094806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 00:30:32.870 [2024-12-09 05:25:15.094918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.094936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 00:30:32.870 [2024-12-09 05:25:15.095027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.095044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 00:30:32.870 [2024-12-09 05:25:15.095297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.095316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 
00:30:32.870 [2024-12-09 05:25:15.095534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.095553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 00:30:32.870 [2024-12-09 05:25:15.095645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.095661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 00:30:32.870 [2024-12-09 05:25:15.095831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.095849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 00:30:32.870 [2024-12-09 05:25:15.096007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.096031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 00:30:32.870 [2024-12-09 05:25:15.096280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.096298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 
00:30:32.870 [2024-12-09 05:25:15.096537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.870 [2024-12-09 05:25:15.096556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.870 qpair failed and we were unable to recover it. 00:30:32.870 [... the three messages above repeat ~115 times for the same tqpair=0x1c91000, addr=10.0.0.2, port=4420, with timestamps 05:25:15.096721 through 05:25:15.117989 ...]
00:30:32.874 [2024-12-09 05:25:15.118145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.874 [2024-12-09 05:25:15.118162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.874 qpair failed and we were unable to recover it. 00:30:32.874 [2024-12-09 05:25:15.118386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.874 [2024-12-09 05:25:15.118404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.874 qpair failed and we were unable to recover it. 00:30:32.874 [2024-12-09 05:25:15.118500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.874 [2024-12-09 05:25:15.118515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.874 qpair failed and we were unable to recover it. 00:30:32.874 [2024-12-09 05:25:15.118752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.118770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.118928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.118945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 
00:30:32.875 [2024-12-09 05:25:15.119180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.119201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.119375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.119392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.119542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.119559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.119705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.119722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.119830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.119846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 
00:30:32.875 [2024-12-09 05:25:15.120080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.120097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.120332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.120350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.120500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.120517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.120704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.120721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.120827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.120843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 
00:30:32.875 [2024-12-09 05:25:15.120943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.120960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.121173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.121190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.121298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.121315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.121407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.121422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.121532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.121549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 
00:30:32.875 [2024-12-09 05:25:15.121775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.121793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.121954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.121972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.122120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.122137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.122289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.122306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.122487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.122504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 
00:30:32.875 [2024-12-09 05:25:15.122718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.122736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.122827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.122843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.122987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.123005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.123244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.123262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.123356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.123372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 
00:30:32.875 [2024-12-09 05:25:15.123588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.875 [2024-12-09 05:25:15.123606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.875 qpair failed and we were unable to recover it. 00:30:32.875 [2024-12-09 05:25:15.123846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.123863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.123981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.123997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.124218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.124236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.124424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.124442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 
00:30:32.876 [2024-12-09 05:25:15.124602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.124620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.124764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.124781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.124966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.124983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.125251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.125268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.125457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.125474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 
00:30:32.876 [2024-12-09 05:25:15.125712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.125729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.125955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.125972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.126215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.126233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.126449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.126466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.126631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.126648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 
00:30:32.876 [2024-12-09 05:25:15.126864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.126881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.127061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.127078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.127196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.127235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.127325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.127340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.127498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.127515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 
00:30:32.876 [2024-12-09 05:25:15.127729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.127745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.127853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.127870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.128043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.128060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.128228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.128246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.128348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.128365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 
00:30:32.876 [2024-12-09 05:25:15.128528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.128545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.128789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.128806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.129044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.129062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.129231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.129249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.876 [2024-12-09 05:25:15.129346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.129363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 
00:30:32.876 [2024-12-09 05:25:15.129521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.876 [2024-12-09 05:25:15.129538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.876 qpair failed and we were unable to recover it. 00:30:32.877 [2024-12-09 05:25:15.129627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.877 [2024-12-09 05:25:15.129644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.877 qpair failed and we were unable to recover it. 00:30:32.877 [2024-12-09 05:25:15.129788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.877 [2024-12-09 05:25:15.129805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.877 qpair failed and we were unable to recover it. 00:30:32.877 [2024-12-09 05:25:15.132450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.877 [2024-12-09 05:25:15.132469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.877 qpair failed and we were unable to recover it. 00:30:32.877 [2024-12-09 05:25:15.132634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.877 [2024-12-09 05:25:15.132650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.877 qpair failed and we were unable to recover it. 
00:30:32.877 [2024-12-09 05:25:15.132901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.877 [2024-12-09 05:25:15.132924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.877 qpair failed and we were unable to recover it. 00:30:32.877 [2024-12-09 05:25:15.133151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.877 [2024-12-09 05:25:15.133174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.877 qpair failed and we were unable to recover it. 00:30:32.877 [2024-12-09 05:25:15.133356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.877 [2024-12-09 05:25:15.133378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.877 qpair failed and we were unable to recover it. 00:30:32.877 [2024-12-09 05:25:15.133541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.877 [2024-12-09 05:25:15.133563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.877 qpair failed and we were unable to recover it. 00:30:32.877 [2024-12-09 05:25:15.133765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.877 [2024-12-09 05:25:15.133788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.877 qpair failed and we were unable to recover it. 
00:30:32.877 [2024-12-09 05:25:15.134043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.877 [2024-12-09 05:25:15.134066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.877 qpair failed and we were unable to recover it. 00:30:32.877 [2024-12-09 05:25:15.134233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.877 [2024-12-09 05:25:15.134257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.877 qpair failed and we were unable to recover it. 00:30:32.877 [2024-12-09 05:25:15.134483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.877 [2024-12-09 05:25:15.134506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.877 qpair failed and we were unable to recover it. 00:30:32.877 [2024-12-09 05:25:15.134707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.877 [2024-12-09 05:25:15.134734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.877 qpair failed and we were unable to recover it. 00:30:32.877 [2024-12-09 05:25:15.134921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.877 [2024-12-09 05:25:15.134945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.877 qpair failed and we were unable to recover it. 
00:30:32.877 [2024-12-09 05:25:15.135055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.877 [2024-12-09 05:25:15.135077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.877 qpair failed and we were unable to recover it.
00:30:32.877 [2024-12-09 05:25:15.135270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.877 [2024-12-09 05:25:15.135294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.877 qpair failed and we were unable to recover it.
00:30:32.877 [2024-12-09 05:25:15.135469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.877 [2024-12-09 05:25:15.135492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.877 qpair failed and we were unable to recover it.
00:30:32.877 [2024-12-09 05:25:15.135662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.877 [2024-12-09 05:25:15.135684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.877 qpair failed and we were unable to recover it.
00:30:32.877 [2024-12-09 05:25:15.135933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.877 [2024-12-09 05:25:15.135955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.877 qpair failed and we were unable to recover it.
00:30:32.877 [2024-12-09 05:25:15.136062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.877 [2024-12-09 05:25:15.136084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.877 qpair failed and we were unable to recover it.
00:30:32.877 [2024-12-09 05:25:15.136313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.877 [2024-12-09 05:25:15.136337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.877 qpair failed and we were unable to recover it.
00:30:32.877 [2024-12-09 05:25:15.136596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.877 [2024-12-09 05:25:15.136620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.877 qpair failed and we were unable to recover it.
00:30:32.877 [2024-12-09 05:25:15.136728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.877 [2024-12-09 05:25:15.136755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.877 qpair failed and we were unable to recover it.
00:30:32.877 [2024-12-09 05:25:15.136960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.877 [2024-12-09 05:25:15.136984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.877 qpair failed and we were unable to recover it.
00:30:32.877 [2024-12-09 05:25:15.137176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.877 [2024-12-09 05:25:15.137199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.877 qpair failed and we were unable to recover it.
00:30:32.877 [2024-12-09 05:25:15.137385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.877 [2024-12-09 05:25:15.137408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.877 qpair failed and we were unable to recover it.
00:30:32.877 [2024-12-09 05:25:15.137575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.877 [2024-12-09 05:25:15.137598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.877 qpair failed and we were unable to recover it.
00:30:32.877 [2024-12-09 05:25:15.137751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.877 [2024-12-09 05:25:15.137774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.877 qpair failed and we were unable to recover it.
00:30:32.877 [2024-12-09 05:25:15.137889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.877 [2024-12-09 05:25:15.137911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.877 qpair failed and we were unable to recover it.
00:30:32.877 [2024-12-09 05:25:15.138087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.877 [2024-12-09 05:25:15.138111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.877 qpair failed and we were unable to recover it.
00:30:32.877 [2024-12-09 05:25:15.138282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.138305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.138483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.138505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.138611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.138635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.138884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.138907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.139095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.139118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.139376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.139400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.139665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.139687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.139865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.139887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.139997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.140019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.140271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.140299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.140470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.140495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.140684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.140707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.140995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.141027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.141244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.141277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.141459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.141490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.141734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.141766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.141980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.142012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.142254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.142297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.142484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.142515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.142703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.142736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.142925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.142955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.143224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.143257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.143473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.143504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.143635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.143665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.143837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.143869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.144136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.144167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.144360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.144392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.144580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.144611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.144754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.878 [2024-12-09 05:25:15.144785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.878 qpair failed and we were unable to recover it.
00:30:32.878 [2024-12-09 05:25:15.145048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.879 [2024-12-09 05:25:15.145079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.879 qpair failed and we were unable to recover it.
00:30:32.879 [2024-12-09 05:25:15.145200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.879 [2024-12-09 05:25:15.145242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.879 qpair failed and we were unable to recover it.
00:30:32.879 [2024-12-09 05:25:15.145506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.879 [2024-12-09 05:25:15.145536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.879 qpair failed and we were unable to recover it.
00:30:32.879 [2024-12-09 05:25:15.145756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.879 [2024-12-09 05:25:15.145786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.879 qpair failed and we were unable to recover it.
00:30:32.879 [2024-12-09 05:25:15.146028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.879 [2024-12-09 05:25:15.146058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.879 qpair failed and we were unable to recover it.
00:30:32.879 [2024-12-09 05:25:15.146235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.879 [2024-12-09 05:25:15.146268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.879 qpair failed and we were unable to recover it.
00:30:32.879 [2024-12-09 05:25:15.146460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.879 [2024-12-09 05:25:15.146491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.879 qpair failed and we were unable to recover it.
00:30:32.879 [2024-12-09 05:25:15.146782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.879 [2024-12-09 05:25:15.146813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.879 qpair failed and we were unable to recover it.
00:30:32.879 [2024-12-09 05:25:15.147004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.879 [2024-12-09 05:25:15.147036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.879 qpair failed and we were unable to recover it.
00:30:32.879 [2024-12-09 05:25:15.147235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.879 [2024-12-09 05:25:15.147267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.879 qpair failed and we were unable to recover it.
00:30:32.879 [2024-12-09 05:25:15.147452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.879 [2024-12-09 05:25:15.147484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.879 qpair failed and we were unable to recover it.
00:30:32.879 [2024-12-09 05:25:15.147675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.879 [2024-12-09 05:25:15.147706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.879 qpair failed and we were unable to recover it.
00:30:32.879 [2024-12-09 05:25:15.147966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.879 [2024-12-09 05:25:15.147997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.879 qpair failed and we were unable to recover it.
00:30:32.879 [2024-12-09 05:25:15.148137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.879 [2024-12-09 05:25:15.148168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.879 qpair failed and we were unable to recover it.
00:30:32.879 [2024-12-09 05:25:15.148448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.879 [2024-12-09 05:25:15.148480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.879 qpair failed and we were unable to recover it.
00:30:32.879 [2024-12-09 05:25:15.148658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.879 [2024-12-09 05:25:15.148688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.879 qpair failed and we were unable to recover it.
00:30:32.879 [2024-12-09 05:25:15.148951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.879 [2024-12-09 05:25:15.148983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.879 qpair failed and we were unable to recover it.
00:30:32.879 [2024-12-09 05:25:15.149269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.879 [2024-12-09 05:25:15.149301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.879 qpair failed and we were unable to recover it.
00:30:32.879 [2024-12-09 05:25:15.149571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.879 [2024-12-09 05:25:15.149602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.879 qpair failed and we were unable to recover it.
00:30:32.879 [2024-12-09 05:25:15.149867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.879 [2024-12-09 05:25:15.149898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.879 qpair failed and we were unable to recover it.
00:30:32.879 [2024-12-09 05:25:15.150041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.150072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.150281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.150314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.150447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.150478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.150657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.150687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.150863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.150895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.151018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.151050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.151324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.151365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.151653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.151693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.151984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.152024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.152242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.152283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.152551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.152592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.152808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.152849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.153054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.153093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.153392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.153434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.153674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.153715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.153981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.154021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.154327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.154369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.154582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.154623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.154754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.154793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.154999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.155040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.155261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.155303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.155594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.155635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.155901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.155941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.156099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.156140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.156436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.156479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.156746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.156786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.156938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.156984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.157126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.157167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.880 qpair failed and we were unable to recover it.
00:30:32.880 [2024-12-09 05:25:15.157375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.880 [2024-12-09 05:25:15.157423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.157639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.881 [2024-12-09 05:25:15.157679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.157951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.881 [2024-12-09 05:25:15.157991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.158264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.881 [2024-12-09 05:25:15.158307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.158530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.881 [2024-12-09 05:25:15.158570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.158847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.881 [2024-12-09 05:25:15.158886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.159084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.881 [2024-12-09 05:25:15.159123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.159343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.881 [2024-12-09 05:25:15.159385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.159587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.881 [2024-12-09 05:25:15.159629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.159825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.881 [2024-12-09 05:25:15.159865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.160096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.881 [2024-12-09 05:25:15.160136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.160378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.881 [2024-12-09 05:25:15.160420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.160716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.881 [2024-12-09 05:25:15.160758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.161032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.881 [2024-12-09 05:25:15.161073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.161287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.881 [2024-12-09 05:25:15.161328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.161544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.881 [2024-12-09 05:25:15.161585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.161906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.881 [2024-12-09 05:25:15.161949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.162173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.881 [2024-12-09 05:25:15.162226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.162560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.881 [2024-12-09 05:25:15.162605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.162803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.881 [2024-12-09 05:25:15.162847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.162990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.881 [2024-12-09 05:25:15.163031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.163182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.881 [2024-12-09 05:25:15.163238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.881 qpair failed and we were unable to recover it.
00:30:32.881 [2024-12-09 05:25:15.163501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.881 [2024-12-09 05:25:15.163541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.881 qpair failed and we were unable to recover it. 00:30:32.881 [2024-12-09 05:25:15.163775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.881 [2024-12-09 05:25:15.163816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.881 qpair failed and we were unable to recover it. 00:30:32.881 [2024-12-09 05:25:15.164009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.881 [2024-12-09 05:25:15.164049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.881 qpair failed and we were unable to recover it. 00:30:32.881 [2024-12-09 05:25:15.164188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.881 [2024-12-09 05:25:15.164239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.881 qpair failed and we were unable to recover it. 00:30:32.881 [2024-12-09 05:25:15.164501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.881 [2024-12-09 05:25:15.164542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.881 qpair failed and we were unable to recover it. 
00:30:32.881 [2024-12-09 05:25:15.164804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.881 [2024-12-09 05:25:15.164851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.881 qpair failed and we were unable to recover it. 00:30:32.881 [2024-12-09 05:25:15.165039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.881 [2024-12-09 05:25:15.165079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.881 qpair failed and we were unable to recover it. 00:30:32.881 [2024-12-09 05:25:15.165367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.881 [2024-12-09 05:25:15.165410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.881 qpair failed and we were unable to recover it. 00:30:32.881 [2024-12-09 05:25:15.165637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.881 [2024-12-09 05:25:15.165676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.881 qpair failed and we were unable to recover it. 00:30:32.881 [2024-12-09 05:25:15.165960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.881 [2024-12-09 05:25:15.166000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.881 qpair failed and we were unable to recover it. 
00:30:32.881 [2024-12-09 05:25:15.166206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.881 [2024-12-09 05:25:15.166259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.881 qpair failed and we were unable to recover it. 00:30:32.881 [2024-12-09 05:25:15.166486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.881 [2024-12-09 05:25:15.166527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.881 qpair failed and we were unable to recover it. 00:30:32.881 [2024-12-09 05:25:15.166812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.881 [2024-12-09 05:25:15.166853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.881 qpair failed and we were unable to recover it. 00:30:32.881 [2024-12-09 05:25:15.167135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.881 [2024-12-09 05:25:15.167176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.881 qpair failed and we were unable to recover it. 00:30:32.881 [2024-12-09 05:25:15.167468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.881 [2024-12-09 05:25:15.167508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.881 qpair failed and we were unable to recover it. 
00:30:32.881 [2024-12-09 05:25:15.167746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.881 [2024-12-09 05:25:15.167787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.881 qpair failed and we were unable to recover it. 00:30:32.881 [2024-12-09 05:25:15.168015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.881 [2024-12-09 05:25:15.168056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.881 qpair failed and we were unable to recover it. 00:30:32.881 [2024-12-09 05:25:15.168252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.881 [2024-12-09 05:25:15.168294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.881 qpair failed and we were unable to recover it. 00:30:32.881 [2024-12-09 05:25:15.168444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.881 [2024-12-09 05:25:15.168483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.881 qpair failed and we were unable to recover it. 00:30:32.881 [2024-12-09 05:25:15.168750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.881 [2024-12-09 05:25:15.168790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 
00:30:32.882 [2024-12-09 05:25:15.169059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.169100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.169294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.169336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.169534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.169574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.169784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.169824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.170039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.170078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 
00:30:32.882 [2024-12-09 05:25:15.170276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.170317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.170602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.170642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.170933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.170972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.171231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.171272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.171508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.171548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 
00:30:32.882 [2024-12-09 05:25:15.171761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.171800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.171994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.172034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.172284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.172332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.172637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.172677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.172936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.172976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 
00:30:32.882 [2024-12-09 05:25:15.173196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.173250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.173417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.173457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.173664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.173703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.173901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.173942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.174154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.174194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 
00:30:32.882 [2024-12-09 05:25:15.174398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.174438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.174633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.174674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.174863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.174903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.175056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.175096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.175303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.175345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 
00:30:32.882 [2024-12-09 05:25:15.175561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.175601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.175893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.175934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.176078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.176118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.176377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.176418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.176626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.176666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 
00:30:32.882 [2024-12-09 05:25:15.176879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.176919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.177221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.177264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.177526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.177566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.177844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.177884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.178027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.178067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 
00:30:32.882 [2024-12-09 05:25:15.178331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.178372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.178591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.178631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.178901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.178940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.179236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.179278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.179571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.179611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 
00:30:32.882 [2024-12-09 05:25:15.179778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.179819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.180137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.180177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.180449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.180489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.180746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.180786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.882 [2024-12-09 05:25:15.181046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.181087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 
00:30:32.882 [2024-12-09 05:25:15.181381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.882 [2024-12-09 05:25:15.181423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.882 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.181704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.181744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.182026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.182066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.182294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.182336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.182568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.182608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 
00:30:32.883 [2024-12-09 05:25:15.182822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.182863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.183176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.183229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.183453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.183493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.183778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.183818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.184046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.184086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 
00:30:32.883 [2024-12-09 05:25:15.184293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.184334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.184545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.184587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.184719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.184760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.184997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.185037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.185183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.185236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 
00:30:32.883 [2024-12-09 05:25:15.185497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.185538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.185667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.185712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.185843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.185882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.186090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.186130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.186330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.186371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 
00:30:32.883 [2024-12-09 05:25:15.186597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.186636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.186852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.186893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.187041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.187082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.187278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.187319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.187624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.187665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 
00:30:32.883 [2024-12-09 05:25:15.187857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.187898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.188053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.188093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.188289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.188330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.188470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.188510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.188770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.188810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 
00:30:32.883 [2024-12-09 05:25:15.189073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.189118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.189328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.189369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.189507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.189545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.189826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.189866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.190084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.190124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 
00:30:32.883 [2024-12-09 05:25:15.190268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.190319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.190519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.190558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.190840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.190879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.191138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.191179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.191403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.191444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 
00:30:32.883 [2024-12-09 05:25:15.191676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.191716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.191919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.191959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.192119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.192159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.192385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.192428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.192694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.192733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 
00:30:32.883 [2024-12-09 05:25:15.193037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.193077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.193227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.193269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.193521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.193561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.193819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.193859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.194168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.194221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 
00:30:32.883 [2024-12-09 05:25:15.194508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.883 [2024-12-09 05:25:15.194548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.883 qpair failed and we were unable to recover it. 00:30:32.883 [2024-12-09 05:25:15.194828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.194867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.195019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.195060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.195340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.195381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.195592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.195631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 
00:30:32.884 [2024-12-09 05:25:15.195859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.195900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.196097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.196136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.196295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.196336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.196543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.196583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.196777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.196817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 
00:30:32.884 [2024-12-09 05:25:15.197093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.197133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.197355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.197396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.197541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.197588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.197797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.197837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.198119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.198158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 
00:30:32.884 [2024-12-09 05:25:15.198379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.198420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.198631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.198672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.198935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.198974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.199176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.199229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.199438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.199479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 
00:30:32.884 [2024-12-09 05:25:15.199738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.199778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.200006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.200045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.200254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.200297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.200523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.200563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.200824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.200864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 
00:30:32.884 [2024-12-09 05:25:15.201115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.201156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.201368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.201409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.201691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.201731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.201943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.201984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.202266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.202307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 
00:30:32.884 [2024-12-09 05:25:15.202569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.202608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.202883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.202923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.203188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.203239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.203499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.203539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.203734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.203773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 
00:30:32.884 [2024-12-09 05:25:15.204064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.204104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.204320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.204361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.204582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.204622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.204813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.204854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.205113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.205153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 
00:30:32.884 [2024-12-09 05:25:15.205323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.205364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.205631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.205672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.205881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.205922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.206164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.206204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.206431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.206472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 
00:30:32.884 [2024-12-09 05:25:15.206705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.206751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.207037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.207076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.207360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.207401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.207593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.207634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.207893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.207934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 
00:30:32.884 [2024-12-09 05:25:15.208193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.208247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.208551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.208592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.884 [2024-12-09 05:25:15.208803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.884 [2024-12-09 05:25:15.208842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.884 qpair failed and we were unable to recover it. 00:30:32.885 [2024-12-09 05:25:15.209096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.885 [2024-12-09 05:25:15.209178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:32.885 qpair failed and we were unable to recover it. 00:30:32.885 [2024-12-09 05:25:15.209493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.885 [2024-12-09 05:25:15.209550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:32.885 qpair failed and we were unable to recover it. 
00:30:32.885 [2024-12-09 05:25:15.209857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.885 [2024-12-09 05:25:15.209906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:32.885 qpair failed and we were unable to recover it. 00:30:32.885 [2024-12-09 05:25:15.210233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.885 [2024-12-09 05:25:15.210288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:32.885 qpair failed and we were unable to recover it. 00:30:32.885 [2024-12-09 05:25:15.210571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.885 [2024-12-09 05:25:15.210632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:32.885 qpair failed and we were unable to recover it. 00:30:32.885 [2024-12-09 05:25:15.210938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.885 [2024-12-09 05:25:15.210988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:32.885 qpair failed and we were unable to recover it. 00:30:32.885 [2024-12-09 05:25:15.211140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.885 [2024-12-09 05:25:15.211189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:32.885 qpair failed and we were unable to recover it. 
00:30:32.885 [2024-12-09 05:25:15.219804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.885 [2024-12-09 05:25:15.219855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:32.885 qpair failed and we were unable to recover it.
00:30:32.885 [2024-12-09 05:25:15.220098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.885 [2024-12-09 05:25:15.220147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:32.885 qpair failed and we were unable to recover it.
00:30:32.885 [2024-12-09 05:25:15.220437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.885 [2024-12-09 05:25:15.220488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:32.885 qpair failed and we were unable to recover it.
00:30:32.885 [2024-12-09 05:25:15.220681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.885 [2024-12-09 05:25:15.220759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.885 qpair failed and we were unable to recover it.
00:30:32.885 [2024-12-09 05:25:15.221040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.885 [2024-12-09 05:25:15.221085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:32.885 qpair failed and we were unable to recover it.
00:30:32.887 [2024-12-09 05:25:15.240889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.887 [2024-12-09 05:25:15.240928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.887 qpair failed and we were unable to recover it. 00:30:32.887 [2024-12-09 05:25:15.241219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.887 [2024-12-09 05:25:15.241262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.887 qpair failed and we were unable to recover it. 00:30:32.887 [2024-12-09 05:25:15.241457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.887 [2024-12-09 05:25:15.241497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.887 qpair failed and we were unable to recover it. 00:30:32.887 [2024-12-09 05:25:15.241691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.887 [2024-12-09 05:25:15.241731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.887 qpair failed and we were unable to recover it. 00:30:32.887 [2024-12-09 05:25:15.241993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.887 [2024-12-09 05:25:15.242039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.887 qpair failed and we were unable to recover it. 
00:30:32.887 [2024-12-09 05:25:15.242267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.887 [2024-12-09 05:25:15.242309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.887 qpair failed and we were unable to recover it. 00:30:32.887 [2024-12-09 05:25:15.242503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.887 [2024-12-09 05:25:15.242543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.887 qpair failed and we were unable to recover it. 00:30:32.887 [2024-12-09 05:25:15.242803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.887 [2024-12-09 05:25:15.242843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.887 qpair failed and we were unable to recover it. 00:30:32.887 [2024-12-09 05:25:15.243049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.887 [2024-12-09 05:25:15.243089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.887 qpair failed and we were unable to recover it. 00:30:32.887 [2024-12-09 05:25:15.243302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.887 [2024-12-09 05:25:15.243344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.887 qpair failed and we were unable to recover it. 
00:30:32.887 [2024-12-09 05:25:15.243551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.887 [2024-12-09 05:25:15.243592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.887 qpair failed and we were unable to recover it. 00:30:32.887 [2024-12-09 05:25:15.243857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.887 [2024-12-09 05:25:15.243897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.887 qpair failed and we were unable to recover it. 00:30:32.887 [2024-12-09 05:25:15.244102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.887 [2024-12-09 05:25:15.244141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.887 qpair failed and we were unable to recover it. 00:30:32.887 [2024-12-09 05:25:15.244439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.887 [2024-12-09 05:25:15.244481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.887 qpair failed and we were unable to recover it. 00:30:32.887 [2024-12-09 05:25:15.244620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.887 [2024-12-09 05:25:15.244662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.887 qpair failed and we were unable to recover it. 
00:30:32.887 [2024-12-09 05:25:15.244893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.887 [2024-12-09 05:25:15.244933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.887 qpair failed and we were unable to recover it. 00:30:32.887 [2024-12-09 05:25:15.245166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.887 [2024-12-09 05:25:15.245206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.887 qpair failed and we were unable to recover it. 00:30:32.887 [2024-12-09 05:25:15.245434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.245475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.245680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.245722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.245999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.246040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 
00:30:32.888 [2024-12-09 05:25:15.246337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.246381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.246590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.246631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.246888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.246929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.247120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.247161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.247463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.247505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 
00:30:32.888 [2024-12-09 05:25:15.247762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.247803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.248061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.248102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.248384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.248426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.248638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.248679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.248877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.248917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 
00:30:32.888 [2024-12-09 05:25:15.249042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.249082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.249284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.249325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.249546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.249587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.249740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.249780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.249982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.250022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 
00:30:32.888 [2024-12-09 05:25:15.250281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.250322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.250582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.250623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.250907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.250947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.251082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.251123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.251279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.251319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 
00:30:32.888 [2024-12-09 05:25:15.251624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.251665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.251876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.251917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.252048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.252087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.252292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.252332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 00:30:32.888 [2024-12-09 05:25:15.252562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.888 [2024-12-09 05:25:15.252601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:32.888 qpair failed and we were unable to recover it. 
00:30:32.888 [2024-12-09 05:25:15.252870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.888 [2024-12-09 05:25:15.252948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:32.888 qpair failed and we were unable to recover it.
00:30:32.888 [identical connect()/qpair failure messages for tqpair=0x7ff96c000b90 repeated through 05:25:15.265879]
00:30:32.889 [2024-12-09 05:25:15.266162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.266203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.266424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.266464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.266732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.266772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.267056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.267097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.267387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.267429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 
00:30:32.889 [2024-12-09 05:25:15.267566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.267606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.267806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.267847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.268130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.268170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.268441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.268482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.268698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.268739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 
00:30:32.889 [2024-12-09 05:25:15.269018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.269057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.269269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.269311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.269578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.269619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.269742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.269782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.269991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.270031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 
00:30:32.889 [2024-12-09 05:25:15.270238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.270280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.270561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.270601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.270801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.270841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.271053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.271092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.271296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.271338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 
00:30:32.889 [2024-12-09 05:25:15.271620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.271661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.271957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.271996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.272249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.272291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.272561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.272602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.272884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.272924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 
00:30:32.889 [2024-12-09 05:25:15.273201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.273253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.273406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.273446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.273652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.273692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.273888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.273928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.274129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.274170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 
00:30:32.889 [2024-12-09 05:25:15.274322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.274369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.274649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.274689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.274970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.275010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.275273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.275315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.275513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.275553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 
00:30:32.889 [2024-12-09 05:25:15.275783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.275823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.276102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.276142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.276360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.276401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.276626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.276666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.889 qpair failed and we were unable to recover it. 00:30:32.889 [2024-12-09 05:25:15.276927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.889 [2024-12-09 05:25:15.276966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 
00:30:32.890 [2024-12-09 05:25:15.277177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.277225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.277432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.277471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.277734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.277774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.277968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.278008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.278228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.278270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 
00:30:32.890 [2024-12-09 05:25:15.278557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.278597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.278899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.278938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.279147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.279187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.279458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.279498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.279649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.279689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 
00:30:32.890 [2024-12-09 05:25:15.279886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.279926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.280199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.280273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.280552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.280592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.280885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.280925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.281200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.281252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 
00:30:32.890 [2024-12-09 05:25:15.281446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.281486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.281636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.281676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.281888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.281929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.282238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.282280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.282425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.282465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 
00:30:32.890 [2024-12-09 05:25:15.282658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.282697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.282979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.283019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.283224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.283265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.283425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.283466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.283750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.283789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 
00:30:32.890 [2024-12-09 05:25:15.283935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.283975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.284183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.284234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.284545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.284585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.284824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.284864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.285093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.285134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 
00:30:32.890 [2024-12-09 05:25:15.285379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.285425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.285679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.285719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.285923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.285963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.286242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.286284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 00:30:32.890 [2024-12-09 05:25:15.286516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.286557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 
00:30:32.890 [2024-12-09 05:25:15.286823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.890 [2024-12-09 05:25:15.286863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:32.890 qpair failed and we were unable to recover it. 
00:30:33.167 [2024-12-09 05:25:15.318410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.318457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.318607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.318648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.318878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.318917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.319054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.319094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.319376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.319419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 
00:30:33.167 [2024-12-09 05:25:15.319615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.319655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.319912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.319951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.320227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.320269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.320492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.320531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.320812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.320853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 
00:30:33.167 [2024-12-09 05:25:15.321084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.321124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.321413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.321454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.321716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.321756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.321971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.322011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.322232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.322273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 
00:30:33.167 [2024-12-09 05:25:15.322478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.322518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.322776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.322817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.323012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.323052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.323256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.323298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.323557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.323597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 
00:30:33.167 [2024-12-09 05:25:15.323854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.323893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.324153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.324193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.324407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.324447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.324648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.324688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.324952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.324992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 
00:30:33.167 [2024-12-09 05:25:15.325280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.325321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.325515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.325554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.325819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.167 [2024-12-09 05:25:15.325860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.167 qpair failed and we were unable to recover it. 00:30:33.167 [2024-12-09 05:25:15.326051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.326090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.326363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.326404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 
00:30:33.168 [2024-12-09 05:25:15.326608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.326649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.326953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.326993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.327273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.327314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.327516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.327557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.327825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.327864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 
00:30:33.168 [2024-12-09 05:25:15.328145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.328185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.328454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.328495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.328688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.328728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.328935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.328975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.329233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.329274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 
00:30:33.168 [2024-12-09 05:25:15.329481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.329527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.329684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.329724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.330005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.330045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.330266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.330307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.330511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.330550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 
00:30:33.168 [2024-12-09 05:25:15.330702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.330743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.331028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.331068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.331325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.331366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.331645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.331685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.331964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.332004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 
00:30:33.168 [2024-12-09 05:25:15.332283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.332325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.332531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.332571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.332804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.332843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.332994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.333033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.333180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.333228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 
00:30:33.168 [2024-12-09 05:25:15.333487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.333526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.333783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.333822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.333957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.333997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.334189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.334237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.334452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.334492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 
00:30:33.168 [2024-12-09 05:25:15.334772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.334811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.335043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.335083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.335352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.335394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.335607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.335646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.335840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.335880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 
00:30:33.168 [2024-12-09 05:25:15.336164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.168 [2024-12-09 05:25:15.336204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.168 qpair failed and we were unable to recover it. 00:30:33.168 [2024-12-09 05:25:15.336493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.169 [2024-12-09 05:25:15.336532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.169 qpair failed and we were unable to recover it. 00:30:33.169 [2024-12-09 05:25:15.336821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.169 [2024-12-09 05:25:15.336862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.169 qpair failed and we were unable to recover it. 00:30:33.169 [2024-12-09 05:25:15.337120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.169 [2024-12-09 05:25:15.337160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.169 qpair failed and we were unable to recover it. 00:30:33.169 [2024-12-09 05:25:15.337364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.169 [2024-12-09 05:25:15.337405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.169 qpair failed and we were unable to recover it. 
00:30:33.169 [2024-12-09 05:25:15.337603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.169 [2024-12-09 05:25:15.337643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.169 qpair failed and we were unable to recover it. 00:30:33.169 [2024-12-09 05:25:15.337948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.169 [2024-12-09 05:25:15.337989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.169 qpair failed and we were unable to recover it. 00:30:33.169 [2024-12-09 05:25:15.338258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.169 [2024-12-09 05:25:15.338299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.169 qpair failed and we were unable to recover it. 00:30:33.169 [2024-12-09 05:25:15.338610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.169 [2024-12-09 05:25:15.338650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.169 qpair failed and we were unable to recover it. 00:30:33.169 [2024-12-09 05:25:15.338878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.169 [2024-12-09 05:25:15.338917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.169 qpair failed and we were unable to recover it. 
00:30:33.169 [2024-12-09 05:25:15.339130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.169 [2024-12-09 05:25:15.339170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.169 qpair failed and we were unable to recover it. 00:30:33.169 [2024-12-09 05:25:15.339392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.169 [2024-12-09 05:25:15.339433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.169 qpair failed and we were unable to recover it. 00:30:33.169 [2024-12-09 05:25:15.339630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.169 [2024-12-09 05:25:15.339670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.169 qpair failed and we were unable to recover it. 00:30:33.169 [2024-12-09 05:25:15.339955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.169 [2024-12-09 05:25:15.339996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.169 qpair failed and we were unable to recover it. 00:30:33.169 [2024-12-09 05:25:15.340244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.169 [2024-12-09 05:25:15.340286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.169 qpair failed and we were unable to recover it. 
00:30:33.172 [2024-12-09 05:25:15.370914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.172 [2024-12-09 05:25:15.370955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.172 qpair failed and we were unable to recover it. 00:30:33.172 [2024-12-09 05:25:15.371162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.172 [2024-12-09 05:25:15.371203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.172 qpair failed and we were unable to recover it. 00:30:33.172 [2024-12-09 05:25:15.371421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.172 [2024-12-09 05:25:15.371466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.172 qpair failed and we were unable to recover it. 00:30:33.172 [2024-12-09 05:25:15.371739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.172 [2024-12-09 05:25:15.371779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.172 qpair failed and we were unable to recover it. 00:30:33.172 [2024-12-09 05:25:15.372012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.172 [2024-12-09 05:25:15.372051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.172 qpair failed and we were unable to recover it. 
00:30:33.172 [2024-12-09 05:25:15.372242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.172 [2024-12-09 05:25:15.372283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.172 qpair failed and we were unable to recover it. 00:30:33.172 [2024-12-09 05:25:15.372544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.172 [2024-12-09 05:25:15.372585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.172 qpair failed and we were unable to recover it. 00:30:33.172 [2024-12-09 05:25:15.372867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.172 [2024-12-09 05:25:15.372907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.172 qpair failed and we were unable to recover it. 00:30:33.172 [2024-12-09 05:25:15.373175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.172 [2024-12-09 05:25:15.373223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.172 qpair failed and we were unable to recover it. 00:30:33.172 [2024-12-09 05:25:15.373454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.172 [2024-12-09 05:25:15.373496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.172 qpair failed and we were unable to recover it. 
00:30:33.172 [2024-12-09 05:25:15.373783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.172 [2024-12-09 05:25:15.373829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.172 qpair failed and we were unable to recover it. 00:30:33.172 [2024-12-09 05:25:15.374111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.172 [2024-12-09 05:25:15.374151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.172 qpair failed and we were unable to recover it. 00:30:33.172 [2024-12-09 05:25:15.374416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.172 [2024-12-09 05:25:15.374457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.172 qpair failed and we were unable to recover it. 00:30:33.172 [2024-12-09 05:25:15.374668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.172 [2024-12-09 05:25:15.374707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.172 qpair failed and we were unable to recover it. 00:30:33.172 [2024-12-09 05:25:15.374868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.172 [2024-12-09 05:25:15.374908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.172 qpair failed and we were unable to recover it. 
00:30:33.172 [2024-12-09 05:25:15.375186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.172 [2024-12-09 05:25:15.375235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.172 qpair failed and we were unable to recover it. 00:30:33.172 [2024-12-09 05:25:15.375433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.172 [2024-12-09 05:25:15.375473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.172 qpair failed and we were unable to recover it. 00:30:33.172 [2024-12-09 05:25:15.375752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.172 [2024-12-09 05:25:15.375792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.172 qpair failed and we were unable to recover it. 00:30:33.172 [2024-12-09 05:25:15.376080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.172 [2024-12-09 05:25:15.376120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.172 qpair failed and we were unable to recover it. 00:30:33.172 [2024-12-09 05:25:15.376378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.172 [2024-12-09 05:25:15.376420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.172 qpair failed and we were unable to recover it. 
00:30:33.172 [2024-12-09 05:25:15.376706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.172 [2024-12-09 05:25:15.376746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.376895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.376934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.377190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.377239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.377561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.377601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.377755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.377795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 
00:30:33.173 [2024-12-09 05:25:15.378032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.378071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.378283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.378324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.378531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.378571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.378774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.378814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.378956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.378997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 
00:30:33.173 [2024-12-09 05:25:15.379141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.379180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.379341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.379381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.379614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.379655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.379939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.379978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.380185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.380247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 
00:30:33.173 [2024-12-09 05:25:15.380567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.380607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.380823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.380863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.381149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.381190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.381410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.381451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.381732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.381772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 
00:30:33.173 [2024-12-09 05:25:15.382061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.382101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.382312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.382354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.382642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.382682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.382878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.382917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.383130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.383170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 
00:30:33.173 [2024-12-09 05:25:15.383449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.383489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.383780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.383821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.384095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.384136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.384341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.384383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.384591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.384630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 
00:30:33.173 [2024-12-09 05:25:15.384830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.384875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.385155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.385195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.385418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.385459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.385668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.385707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.385916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.385956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 
00:30:33.173 [2024-12-09 05:25:15.386099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.386139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.173 [2024-12-09 05:25:15.386363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.173 [2024-12-09 05:25:15.386405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.173 qpair failed and we were unable to recover it. 00:30:33.174 [2024-12-09 05:25:15.386639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.386679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 00:30:33.174 [2024-12-09 05:25:15.386937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.386976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 00:30:33.174 [2024-12-09 05:25:15.387242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.387283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 
00:30:33.174 [2024-12-09 05:25:15.387423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.387462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 00:30:33.174 [2024-12-09 05:25:15.387767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.387808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 00:30:33.174 [2024-12-09 05:25:15.387953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.387992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 00:30:33.174 [2024-12-09 05:25:15.388235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.388275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 00:30:33.174 [2024-12-09 05:25:15.388431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.388471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 
00:30:33.174 [2024-12-09 05:25:15.388676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.388716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 00:30:33.174 [2024-12-09 05:25:15.388937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.388977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 00:30:33.174 [2024-12-09 05:25:15.389282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.389322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 00:30:33.174 [2024-12-09 05:25:15.389555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.389594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 00:30:33.174 [2024-12-09 05:25:15.389731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.389771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 
00:30:33.174 [2024-12-09 05:25:15.390031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.390070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 00:30:33.174 [2024-12-09 05:25:15.390344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.390385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 00:30:33.174 [2024-12-09 05:25:15.390611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.390650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 00:30:33.174 [2024-12-09 05:25:15.390795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.390834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 00:30:33.174 [2024-12-09 05:25:15.391037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.391077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 
00:30:33.174 [2024-12-09 05:25:15.391269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.391310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 00:30:33.174 [2024-12-09 05:25:15.391547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.391586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 00:30:33.174 [2024-12-09 05:25:15.391782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.391861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 00:30:33.174 [2024-12-09 05:25:15.392146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.392194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 00:30:33.174 [2024-12-09 05:25:15.392364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.174 [2024-12-09 05:25:15.392407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.174 qpair failed and we were unable to recover it. 
00:30:33.177 [2024-12-09 05:25:15.423834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.177 [2024-12-09 05:25:15.423891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.177 qpair failed and we were unable to recover it. 00:30:33.177 [2024-12-09 05:25:15.424177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.177 [2024-12-09 05:25:15.424242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.177 qpair failed and we were unable to recover it. 00:30:33.177 [2024-12-09 05:25:15.424517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.177 [2024-12-09 05:25:15.424567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.177 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.424837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.424885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.425225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.425275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 
00:30:33.178 [2024-12-09 05:25:15.425495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.425551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.425827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.425875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.426090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.426146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.426431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.426488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.426717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.426765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 
00:30:33.178 [2024-12-09 05:25:15.426928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.426975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.427137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.427186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.427486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.427543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.427851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.427900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.428113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.428167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 
00:30:33.178 [2024-12-09 05:25:15.428514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.428564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.428790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.428838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.429116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.429167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.429467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.429516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.429808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.429856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 
00:30:33.178 [2024-12-09 05:25:15.430126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.430184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.430470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.430519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.430806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.430855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.431122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.431172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.431492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.431542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 
00:30:33.178 [2024-12-09 05:25:15.431758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.431812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.432022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.432069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.432287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.432352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.432621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.432670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.432951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.433000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 
00:30:33.178 [2024-12-09 05:25:15.433287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.433345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.433563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.433611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.433847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.433897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.434057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.434105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.434417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.434468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 
00:30:33.178 [2024-12-09 05:25:15.434780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.434829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.435017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.435066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.435291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.435343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.435637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.435687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 00:30:33.178 [2024-12-09 05:25:15.435992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.178 [2024-12-09 05:25:15.436042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.178 qpair failed and we were unable to recover it. 
00:30:33.179 [2024-12-09 05:25:15.436369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.436424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.436639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.436695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.437003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.437063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.437295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.437345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.437580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.437629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 
00:30:33.179 [2024-12-09 05:25:15.437857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.437910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.438202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.438268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.438582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.438639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.438863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.438914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.439229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.439284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 
00:30:33.179 [2024-12-09 05:25:15.439460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.439508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.439795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.439844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.440127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.440175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.440551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.440603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.440932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.440991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 
00:30:33.179 [2024-12-09 05:25:15.441232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.441284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.441540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.441589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.441742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.441798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.442012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.442070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.442369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.442428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 
00:30:33.179 [2024-12-09 05:25:15.442681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.442730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.442973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.443021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.443239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.443302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.443613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.443662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.443933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.443982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 
00:30:33.179 [2024-12-09 05:25:15.444241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.444292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.444599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.444651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.444956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.445006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.445172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.445241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.445520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.445569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 
00:30:33.179 [2024-12-09 05:25:15.445732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.445781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.446033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.446082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.446308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.446359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.446649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.446700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 00:30:33.179 [2024-12-09 05:25:15.447004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.179 [2024-12-09 05:25:15.447054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.179 qpair failed and we were unable to recover it. 
00:30:33.179 [2024-12-09 05:25:15.447323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.180 [2024-12-09 05:25:15.447407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.180 qpair failed and we were unable to recover it. 00:30:33.180 [2024-12-09 05:25:15.447655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.180 [2024-12-09 05:25:15.447710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.180 qpair failed and we were unable to recover it. 00:30:33.180 [2024-12-09 05:25:15.447929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.180 [2024-12-09 05:25:15.447978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.180 qpair failed and we were unable to recover it. 00:30:33.180 [2024-12-09 05:25:15.448196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.180 [2024-12-09 05:25:15.448264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.180 qpair failed and we were unable to recover it. 00:30:33.180 [2024-12-09 05:25:15.448442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.180 [2024-12-09 05:25:15.448489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.180 qpair failed and we were unable to recover it. 
00:30:33.183 [2024-12-09 05:25:15.478825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.478864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.479148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.479197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.479459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.479504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.479698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.479738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.480044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.480087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 
00:30:33.183 [2024-12-09 05:25:15.480248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.480291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.480494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.480541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.480800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.480841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.481132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.481174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.481400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.481442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 
00:30:33.183 [2024-12-09 05:25:15.481655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.481699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.481988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.482030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.482269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.482310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.482454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.482499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.482762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.482812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 
00:30:33.183 [2024-12-09 05:25:15.483122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.483172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.483432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.483482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.483643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.483698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.483897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.483945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.484181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.484239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 
00:30:33.183 [2024-12-09 05:25:15.484521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.484561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.484769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.484809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.485019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.485059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.485263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.485305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.485586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.485626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 
00:30:33.183 [2024-12-09 05:25:15.485816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.485856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.486004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.486044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.486255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.486304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.486572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.486620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.183 [2024-12-09 05:25:15.486937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.486984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 
00:30:33.183 [2024-12-09 05:25:15.487184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.183 [2024-12-09 05:25:15.487247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.183 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.487539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.487581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.487843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.487884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.488170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.488228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.488464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.488505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 
00:30:33.184 [2024-12-09 05:25:15.488764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.488803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.488995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.489035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.489307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.489348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.489650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.489691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.489978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.490019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 
00:30:33.184 [2024-12-09 05:25:15.490253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.490296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.490552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.490592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.490809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.490849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.491065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.491104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.491317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.491359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 
00:30:33.184 [2024-12-09 05:25:15.491504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.491544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.491824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.491864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.492150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.492189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.492346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.492386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.492531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.492572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 
00:30:33.184 [2024-12-09 05:25:15.492728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.492767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.492985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.493026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.493306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.493348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.493585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.493625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.493762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.493802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 
00:30:33.184 [2024-12-09 05:25:15.494035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.494075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.494333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.494374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.494664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.494710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.494993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.495032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.495247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.495289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 
00:30:33.184 [2024-12-09 05:25:15.495486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.495526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.495665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.495705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.495914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.495954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.496242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.184 [2024-12-09 05:25:15.496283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.184 qpair failed and we were unable to recover it. 00:30:33.184 [2024-12-09 05:25:15.496438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.185 [2024-12-09 05:25:15.496478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.185 qpair failed and we were unable to recover it. 
00:30:33.185 [2024-12-09 05:25:15.496740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.185 [2024-12-09 05:25:15.496780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.185 qpair failed and we were unable to recover it. 00:30:33.185 [2024-12-09 05:25:15.497036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.185 [2024-12-09 05:25:15.497076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.185 qpair failed and we were unable to recover it. 00:30:33.185 [2024-12-09 05:25:15.497295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.185 [2024-12-09 05:25:15.497335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.185 qpair failed and we were unable to recover it. 00:30:33.185 [2024-12-09 05:25:15.497606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.185 [2024-12-09 05:25:15.497646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.185 qpair failed and we were unable to recover it. 00:30:33.185 [2024-12-09 05:25:15.497925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.185 [2024-12-09 05:25:15.497965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.185 qpair failed and we were unable to recover it. 
00:30:33.185 [2024-12-09 05:25:15.498250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.185 [2024-12-09 05:25:15.498291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.185 qpair failed and we were unable to recover it. 00:30:33.185 [2024-12-09 05:25:15.498454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.185 [2024-12-09 05:25:15.498495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.185 qpair failed and we were unable to recover it. 00:30:33.185 [2024-12-09 05:25:15.498729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.185 [2024-12-09 05:25:15.498769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.185 qpair failed and we were unable to recover it. 00:30:33.185 [2024-12-09 05:25:15.498992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.185 [2024-12-09 05:25:15.499032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.185 qpair failed and we were unable to recover it. 00:30:33.185 [2024-12-09 05:25:15.499295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.185 [2024-12-09 05:25:15.499338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.185 qpair failed and we were unable to recover it. 
00:30:33.185 [2024-12-09 05:25:15.499545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.185 [2024-12-09 05:25:15.499586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.185 qpair failed and we were unable to recover it. 00:30:33.185 [2024-12-09 05:25:15.499740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.185 [2024-12-09 05:25:15.499780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.185 qpair failed and we were unable to recover it. 00:30:33.185 [2024-12-09 05:25:15.499985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.185 [2024-12-09 05:25:15.500025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.185 qpair failed and we were unable to recover it. 00:30:33.185 [2024-12-09 05:25:15.500254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.185 [2024-12-09 05:25:15.500295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.185 qpair failed and we were unable to recover it. 00:30:33.185 [2024-12-09 05:25:15.500556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.185 [2024-12-09 05:25:15.500597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.185 qpair failed and we were unable to recover it. 
00:30:33.188 [2024-12-09 05:25:15.530883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.530924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 00:30:33.188 [2024-12-09 05:25:15.531221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.531263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 00:30:33.188 [2024-12-09 05:25:15.531545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.531585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 00:30:33.188 [2024-12-09 05:25:15.531800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.531840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 00:30:33.188 [2024-12-09 05:25:15.531996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.532036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 
00:30:33.188 [2024-12-09 05:25:15.532247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.532294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 00:30:33.188 [2024-12-09 05:25:15.532580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.532619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 00:30:33.188 [2024-12-09 05:25:15.532884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.532925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 00:30:33.188 [2024-12-09 05:25:15.533121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.533162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 00:30:33.188 [2024-12-09 05:25:15.533391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.533432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 
00:30:33.188 [2024-12-09 05:25:15.533649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.533689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 00:30:33.188 [2024-12-09 05:25:15.533886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.533927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 00:30:33.188 [2024-12-09 05:25:15.534206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.534256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 00:30:33.188 [2024-12-09 05:25:15.534490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.534530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 00:30:33.188 [2024-12-09 05:25:15.534818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.534859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 
00:30:33.188 [2024-12-09 05:25:15.535144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.535184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 00:30:33.188 [2024-12-09 05:25:15.535422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.535463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 00:30:33.188 [2024-12-09 05:25:15.535722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.535764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 00:30:33.188 [2024-12-09 05:25:15.535972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.536012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 00:30:33.188 [2024-12-09 05:25:15.536233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.536279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 
00:30:33.188 [2024-12-09 05:25:15.536474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.536515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 00:30:33.188 [2024-12-09 05:25:15.536733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.536773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 00:30:33.188 [2024-12-09 05:25:15.537001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.537041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 00:30:33.188 [2024-12-09 05:25:15.537299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.188 [2024-12-09 05:25:15.537341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.188 qpair failed and we were unable to recover it. 00:30:33.188 [2024-12-09 05:25:15.537549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.537590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 
00:30:33.189 [2024-12-09 05:25:15.537868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.537907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.538134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.538174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.538348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.538389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.538681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.538721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.538872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.538912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 
00:30:33.189 [2024-12-09 05:25:15.539122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.539163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.539470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.539511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.539745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.539785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.540021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.540061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.540283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.540325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 
00:30:33.189 [2024-12-09 05:25:15.540607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.540647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.540853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.540892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.541159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.541200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.541467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.541508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.541789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.541829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 
00:30:33.189 [2024-12-09 05:25:15.542084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.542124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.542287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.542343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.542551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.542591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.542805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.542845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.543058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.543099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 
00:30:33.189 [2024-12-09 05:25:15.543391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.543432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.543702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.543742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.543972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.544013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.544230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.544275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.544549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.544590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 
00:30:33.189 [2024-12-09 05:25:15.544838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.544878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.545137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.545176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.545463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.545503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.545719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.545760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 00:30:33.189 [2024-12-09 05:25:15.546018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.189 [2024-12-09 05:25:15.546064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.189 qpair failed and we were unable to recover it. 
00:30:33.189 [2024-12-09 05:25:15.546260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.190 [2024-12-09 05:25:15.546301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.190 qpair failed and we were unable to recover it. 00:30:33.190 [2024-12-09 05:25:15.546456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.190 [2024-12-09 05:25:15.546497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.190 qpair failed and we were unable to recover it. 00:30:33.190 [2024-12-09 05:25:15.546777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.190 [2024-12-09 05:25:15.546817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.190 qpair failed and we were unable to recover it. 00:30:33.190 [2024-12-09 05:25:15.547074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.190 [2024-12-09 05:25:15.547114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.190 qpair failed and we were unable to recover it. 00:30:33.190 [2024-12-09 05:25:15.547278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.190 [2024-12-09 05:25:15.547320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.190 qpair failed and we were unable to recover it. 
00:30:33.190 [2024-12-09 05:25:15.547516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.190 [2024-12-09 05:25:15.547556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.190 qpair failed and we were unable to recover it. 00:30:33.190 [2024-12-09 05:25:15.547816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.190 [2024-12-09 05:25:15.547855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.190 qpair failed and we were unable to recover it. 00:30:33.190 [2024-12-09 05:25:15.548139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.190 [2024-12-09 05:25:15.548180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.190 qpair failed and we were unable to recover it. 00:30:33.190 [2024-12-09 05:25:15.548468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.190 [2024-12-09 05:25:15.548511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.190 qpair failed and we were unable to recover it. 00:30:33.190 [2024-12-09 05:25:15.548700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.190 [2024-12-09 05:25:15.548740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.190 qpair failed and we were unable to recover it. 
00:30:33.190 [2024-12-09 05:25:15.549023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.190 [2024-12-09 05:25:15.549064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.190 qpair failed and we were unable to recover it. 00:30:33.190 [2024-12-09 05:25:15.549257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.190 [2024-12-09 05:25:15.549299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.190 qpair failed and we were unable to recover it. 00:30:33.190 [2024-12-09 05:25:15.549509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.190 [2024-12-09 05:25:15.549549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.190 qpair failed and we were unable to recover it. 00:30:33.190 [2024-12-09 05:25:15.549770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.190 [2024-12-09 05:25:15.549811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.190 qpair failed and we were unable to recover it. 00:30:33.190 [2024-12-09 05:25:15.550077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.190 [2024-12-09 05:25:15.550118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.190 qpair failed and we were unable to recover it. 
00:30:33.190 [2024-12-09 05:25:15.550388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.190 [2024-12-09 05:25:15.550430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.190 qpair failed and we were unable to recover it. 00:30:33.190 [2024-12-09 05:25:15.550667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.190 [2024-12-09 05:25:15.550707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.190 qpair failed and we were unable to recover it. 00:30:33.190 [2024-12-09 05:25:15.551010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.190 [2024-12-09 05:25:15.551051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.190 qpair failed and we were unable to recover it. 00:30:33.190 [2024-12-09 05:25:15.551332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.190 [2024-12-09 05:25:15.551373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.190 qpair failed and we were unable to recover it. 00:30:33.190 [2024-12-09 05:25:15.551654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.190 [2024-12-09 05:25:15.551695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.190 qpair failed and we were unable to recover it. 
00:30:33.190 [2024-12-09 05:25:15.551901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.190 [2024-12-09 05:25:15.551941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.190 qpair failed and we were unable to recover it.
00:30:33.190 [2024-12-09 05:25:15.552146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.190 [2024-12-09 05:25:15.552186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.190 qpair failed and we were unable to recover it.
00:30:33.190 [2024-12-09 05:25:15.552426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.190 [2024-12-09 05:25:15.552468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.190 qpair failed and we were unable to recover it.
00:30:33.190 [2024-12-09 05:25:15.552771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.190 [2024-12-09 05:25:15.552811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.190 qpair failed and we were unable to recover it.
00:30:33.190 [2024-12-09 05:25:15.553025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.190 [2024-12-09 05:25:15.553065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.190 qpair failed and we were unable to recover it.
00:30:33.190 [2024-12-09 05:25:15.553259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.190 [2024-12-09 05:25:15.553300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.190 qpair failed and we were unable to recover it.
00:30:33.190 [2024-12-09 05:25:15.553523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.190 [2024-12-09 05:25:15.553563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.190 qpair failed and we were unable to recover it.
00:30:33.190 [2024-12-09 05:25:15.553824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.190 [2024-12-09 05:25:15.553864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.190 qpair failed and we were unable to recover it.
00:30:33.190 [2024-12-09 05:25:15.554144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.190 [2024-12-09 05:25:15.554185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.190 qpair failed and we were unable to recover it.
00:30:33.190 [2024-12-09 05:25:15.554350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.190 [2024-12-09 05:25:15.554391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.190 qpair failed and we were unable to recover it.
00:30:33.190 [2024-12-09 05:25:15.554586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.190 [2024-12-09 05:25:15.554625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.190 qpair failed and we were unable to recover it.
00:30:33.190 [2024-12-09 05:25:15.554848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.190 [2024-12-09 05:25:15.554888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.190 qpair failed and we were unable to recover it.
00:30:33.190 [2024-12-09 05:25:15.555094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.190 [2024-12-09 05:25:15.555134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.190 qpair failed and we were unable to recover it.
00:30:33.190 [2024-12-09 05:25:15.555425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.190 [2024-12-09 05:25:15.555466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.190 qpair failed and we were unable to recover it.
00:30:33.190 [2024-12-09 05:25:15.555727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.190 [2024-12-09 05:25:15.555767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.190 qpair failed and we were unable to recover it.
00:30:33.190 [2024-12-09 05:25:15.556080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.190 [2024-12-09 05:25:15.556120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.190 qpair failed and we were unable to recover it.
00:30:33.190 [2024-12-09 05:25:15.556273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.190 [2024-12-09 05:25:15.556315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.190 qpair failed and we were unable to recover it.
00:30:33.190 [2024-12-09 05:25:15.556528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.190 [2024-12-09 05:25:15.556568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.190 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.556826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.556867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.557105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.557151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.557419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.557460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.557660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.557701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.557956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.557996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.558197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.558246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.558555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.558595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.558821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.558861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.559138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.559179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.559446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.559486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.559744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.559784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.560093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.560133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.560360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.560403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.560686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.560726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.560928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.560968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.561239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.561281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.561567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.561609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.561818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.561859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.562075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.562115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.562313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.562355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.562514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.562554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.562836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.562877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.563160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.563200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.563357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.563398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.563674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.563715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.563992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.564032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.564250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.564295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.564615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.564655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.564873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.564914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.565117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.565157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.565313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.565354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.565572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.565613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.565870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.565910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.566188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.566239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.566433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.566473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.566671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.566710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.566863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.566903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.191 [2024-12-09 05:25:15.567221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.191 [2024-12-09 05:25:15.567263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.191 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.567473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.567513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.567788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.567828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.568088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.568128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.568365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.568419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.568718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.568759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.568985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.569026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.569240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.569281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.569582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.569622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.569905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.569946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.570158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.570198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.570421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.570462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.570761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.570803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.571072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.571113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.571397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.571438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.571595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.571636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.571778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.571818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.572121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.572161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.572385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.572429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.572593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.572632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.572857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.572897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.573229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.573271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.573484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.573524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.573802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.573842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.574124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.574165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.574463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.574504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.574666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.574706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.574937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.574976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.575104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.575144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.575391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.575432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.575644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.575684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.575825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.575866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.576124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.576164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.576390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.576439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.576650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.576691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.576903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.576943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.577149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.577189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.577426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.577468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.192 qpair failed and we were unable to recover it.
00:30:33.192 [2024-12-09 05:25:15.577661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.192 [2024-12-09 05:25:15.577701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.577913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.193 [2024-12-09 05:25:15.577953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.578219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.193 [2024-12-09 05:25:15.578261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.578558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.193 [2024-12-09 05:25:15.578599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.578811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.193 [2024-12-09 05:25:15.578851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.579064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.193 [2024-12-09 05:25:15.579104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.579342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.193 [2024-12-09 05:25:15.579391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.579602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.193 [2024-12-09 05:25:15.579642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.579862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.193 [2024-12-09 05:25:15.579902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.580185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.193 [2024-12-09 05:25:15.580247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.580469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.193 [2024-12-09 05:25:15.580509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.580718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.193 [2024-12-09 05:25:15.580759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.581039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.193 [2024-12-09 05:25:15.581080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.581353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.193 [2024-12-09 05:25:15.581395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.581604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.193 [2024-12-09 05:25:15.581644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.581871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.193 [2024-12-09 05:25:15.581911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.582190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.193 [2024-12-09 05:25:15.582239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.582501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.193 [2024-12-09 05:25:15.582542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.582823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.193 [2024-12-09 05:25:15.582863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.583132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.193 [2024-12-09 05:25:15.583173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.583473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.193 [2024-12-09 05:25:15.583514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.583704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.193 [2024-12-09 05:25:15.583744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.193 qpair failed and we were unable to recover it.
00:30:33.193 [2024-12-09 05:25:15.583978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.193 [2024-12-09 05:25:15.584017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.193 qpair failed and we were unable to recover it. 00:30:33.193 [2024-12-09 05:25:15.584280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.193 [2024-12-09 05:25:15.584323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.193 qpair failed and we were unable to recover it. 00:30:33.193 [2024-12-09 05:25:15.584538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.193 [2024-12-09 05:25:15.584578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.193 qpair failed and we were unable to recover it. 00:30:33.193 [2024-12-09 05:25:15.584719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.193 [2024-12-09 05:25:15.584758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.193 qpair failed and we were unable to recover it. 00:30:33.193 [2024-12-09 05:25:15.584961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.193 [2024-12-09 05:25:15.585002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.193 qpair failed and we were unable to recover it. 
00:30:33.193 [2024-12-09 05:25:15.585285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.193 [2024-12-09 05:25:15.585326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.193 qpair failed and we were unable to recover it. 00:30:33.193 [2024-12-09 05:25:15.585562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.193 [2024-12-09 05:25:15.585602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.193 qpair failed and we were unable to recover it. 00:30:33.193 [2024-12-09 05:25:15.585831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.193 [2024-12-09 05:25:15.585872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.193 qpair failed and we were unable to recover it. 00:30:33.193 [2024-12-09 05:25:15.586085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.193 [2024-12-09 05:25:15.586126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.193 qpair failed and we were unable to recover it. 00:30:33.193 [2024-12-09 05:25:15.586418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.193 [2024-12-09 05:25:15.586459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.193 qpair failed and we were unable to recover it. 
00:30:33.193 [2024-12-09 05:25:15.586665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.193 [2024-12-09 05:25:15.586706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.193 qpair failed and we were unable to recover it. 00:30:33.193 [2024-12-09 05:25:15.586932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.193 [2024-12-09 05:25:15.586973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.193 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.587133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.587173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.587311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.587352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.587487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.587529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 
00:30:33.194 [2024-12-09 05:25:15.587731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.587771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.587982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.588022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.588166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.588222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.588500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.588540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.588795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.588835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 
00:30:33.194 [2024-12-09 05:25:15.589113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.589154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.589404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.589447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.589665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.589705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.589859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.589899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.590185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.590253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 
00:30:33.194 [2024-12-09 05:25:15.590466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.590506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.590718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.590759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.591024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.591063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.591350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.591391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.591601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.591641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 
00:30:33.194 [2024-12-09 05:25:15.591858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.591897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.592041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.592081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.592288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.592329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.592524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.592564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.592774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.592815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 
00:30:33.194 [2024-12-09 05:25:15.593072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.593113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.593260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.593303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.593515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.593555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.593820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.593861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.593984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.594025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 
00:30:33.194 [2024-12-09 05:25:15.594287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.594328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.594611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.594652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.594860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.594899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.595156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.595197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.595492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.595532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 
00:30:33.194 [2024-12-09 05:25:15.595791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.595832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.596028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.596068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.596261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.596303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.596442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.194 [2024-12-09 05:25:15.596483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.194 qpair failed and we were unable to recover it. 00:30:33.194 [2024-12-09 05:25:15.596687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.596728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 
00:30:33.195 [2024-12-09 05:25:15.596948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.596988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 00:30:33.195 [2024-12-09 05:25:15.597221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.597275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 00:30:33.195 [2024-12-09 05:25:15.597475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.597516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 00:30:33.195 [2024-12-09 05:25:15.597725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.597765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 00:30:33.195 [2024-12-09 05:25:15.598026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.598067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 
00:30:33.195 [2024-12-09 05:25:15.598259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.598300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 00:30:33.195 [2024-12-09 05:25:15.598498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.598539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 00:30:33.195 [2024-12-09 05:25:15.598847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.598887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 00:30:33.195 [2024-12-09 05:25:15.599185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.599236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 00:30:33.195 [2024-12-09 05:25:15.599495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.599535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 
00:30:33.195 [2024-12-09 05:25:15.599732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.599772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 00:30:33.195 [2024-12-09 05:25:15.599903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.599943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 00:30:33.195 [2024-12-09 05:25:15.600199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.600249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 00:30:33.195 [2024-12-09 05:25:15.600391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.600431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 00:30:33.195 [2024-12-09 05:25:15.600650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.600696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 
00:30:33.195 [2024-12-09 05:25:15.600955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.600995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 00:30:33.195 [2024-12-09 05:25:15.601260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.601304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 00:30:33.195 [2024-12-09 05:25:15.601517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.601557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 00:30:33.195 [2024-12-09 05:25:15.601827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.601867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 00:30:33.195 [2024-12-09 05:25:15.602057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.602097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 
00:30:33.195 [2024-12-09 05:25:15.602308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.602349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 00:30:33.195 [2024-12-09 05:25:15.602554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.602594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 00:30:33.195 [2024-12-09 05:25:15.602721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.602762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 00:30:33.195 [2024-12-09 05:25:15.602964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.603004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 00:30:33.195 [2024-12-09 05:25:15.603218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.603260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 
00:30:33.195 [2024-12-09 05:25:15.603474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.195 [2024-12-09 05:25:15.603516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.195 qpair failed and we were unable to recover it. 
[... the same three-message sequence — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously from 05:25:15.603777 through 05:25:15.632389 ...]
00:30:33.473 [2024-12-09 05:25:15.632619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.473 [2024-12-09 05:25:15.632659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.473 qpair failed and we were unable to recover it. 
00:30:33.473 [2024-12-09 05:25:15.632919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.473 [2024-12-09 05:25:15.632959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.473 qpair failed and we were unable to recover it. 00:30:33.473 [2024-12-09 05:25:15.633183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.473 [2024-12-09 05:25:15.633232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.473 qpair failed and we were unable to recover it. 00:30:33.473 [2024-12-09 05:25:15.633368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.473 [2024-12-09 05:25:15.633408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.473 qpair failed and we were unable to recover it. 00:30:33.473 [2024-12-09 05:25:15.633643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.473 [2024-12-09 05:25:15.633684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.473 qpair failed and we were unable to recover it. 00:30:33.473 [2024-12-09 05:25:15.633945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.473 [2024-12-09 05:25:15.633985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.473 qpair failed and we were unable to recover it. 
00:30:33.473 [2024-12-09 05:25:15.634235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.473 [2024-12-09 05:25:15.634283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.473 qpair failed and we were unable to recover it. 00:30:33.473 [2024-12-09 05:25:15.634551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.473 [2024-12-09 05:25:15.634591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.473 qpair failed and we were unable to recover it. 00:30:33.473 [2024-12-09 05:25:15.634780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.473 [2024-12-09 05:25:15.634820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.473 qpair failed and we were unable to recover it. 00:30:33.473 [2024-12-09 05:25:15.635103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.473 [2024-12-09 05:25:15.635144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.473 qpair failed and we were unable to recover it. 00:30:33.473 [2024-12-09 05:25:15.635460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.473 [2024-12-09 05:25:15.635502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.473 qpair failed and we were unable to recover it. 
00:30:33.473 [2024-12-09 05:25:15.635773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.473 [2024-12-09 05:25:15.635813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.473 qpair failed and we were unable to recover it. 00:30:33.473 [2024-12-09 05:25:15.636074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.473 [2024-12-09 05:25:15.636113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.473 qpair failed and we were unable to recover it. 00:30:33.473 [2024-12-09 05:25:15.636256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.473 [2024-12-09 05:25:15.636297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.473 qpair failed and we were unable to recover it. 00:30:33.473 [2024-12-09 05:25:15.636516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.473 [2024-12-09 05:25:15.636556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.473 qpair failed and we were unable to recover it. 00:30:33.473 [2024-12-09 05:25:15.636828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.473 [2024-12-09 05:25:15.636867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.473 qpair failed and we were unable to recover it. 
00:30:33.473 [2024-12-09 05:25:15.637062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.473 [2024-12-09 05:25:15.637102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.473 qpair failed and we were unable to recover it. 00:30:33.473 [2024-12-09 05:25:15.637313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.473 [2024-12-09 05:25:15.637353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.473 qpair failed and we were unable to recover it. 00:30:33.473 [2024-12-09 05:25:15.637643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.473 [2024-12-09 05:25:15.637683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.473 qpair failed and we were unable to recover it. 00:30:33.473 [2024-12-09 05:25:15.637840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.473 [2024-12-09 05:25:15.637881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.473 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.638140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.638180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 
00:30:33.474 [2024-12-09 05:25:15.638352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.638395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.638671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.638712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.638969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.639009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.639300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.639342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.639613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.639653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 
00:30:33.474 [2024-12-09 05:25:15.639811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.639851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.640056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.640096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.640288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.640330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.640589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.640628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.640930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.640970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 
00:30:33.474 [2024-12-09 05:25:15.641265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.641306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.641539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.641578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.641788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.641828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.642086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.642127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.642412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.642460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 
00:30:33.474 [2024-12-09 05:25:15.642763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.642803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.642993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.643034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.643256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.643297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.643504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.643544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.643753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.643793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 
00:30:33.474 [2024-12-09 05:25:15.644076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.644116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.644385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.644427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.644710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.644749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.644939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.644979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.645177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.645226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 
00:30:33.474 [2024-12-09 05:25:15.645420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.645461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.645676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.645716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.646000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.646039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.646383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.646426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.646688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.646728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 
00:30:33.474 [2024-12-09 05:25:15.646955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.646996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.647278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.647321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.647523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.647562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.647846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.647886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.648098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.648139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 
00:30:33.474 [2024-12-09 05:25:15.648409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.648451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.474 [2024-12-09 05:25:15.648647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.474 [2024-12-09 05:25:15.648688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.474 qpair failed and we were unable to recover it. 00:30:33.475 [2024-12-09 05:25:15.648807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.475 [2024-12-09 05:25:15.648854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.475 qpair failed and we were unable to recover it. 00:30:33.475 [2024-12-09 05:25:15.649049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.475 [2024-12-09 05:25:15.649090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.475 qpair failed and we were unable to recover it. 00:30:33.475 [2024-12-09 05:25:15.649302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.475 [2024-12-09 05:25:15.649344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.475 qpair failed and we were unable to recover it. 
00:30:33.475 [2024-12-09 05:25:15.649576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.475 [2024-12-09 05:25:15.649616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.475 qpair failed and we were unable to recover it. 00:30:33.475 [2024-12-09 05:25:15.649903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.475 [2024-12-09 05:25:15.649945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.475 qpair failed and we were unable to recover it. 00:30:33.475 [2024-12-09 05:25:15.650172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.475 [2024-12-09 05:25:15.650224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.475 qpair failed and we were unable to recover it. 00:30:33.475 [2024-12-09 05:25:15.650477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.475 [2024-12-09 05:25:15.650518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.475 qpair failed and we were unable to recover it. 00:30:33.475 [2024-12-09 05:25:15.650820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.475 [2024-12-09 05:25:15.650860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.475 qpair failed and we were unable to recover it. 
00:30:33.475 [2024-12-09 05:25:15.651141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.475 [2024-12-09 05:25:15.651181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.475 qpair failed and we were unable to recover it. 00:30:33.475 [2024-12-09 05:25:15.651396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.475 [2024-12-09 05:25:15.651436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.475 qpair failed and we were unable to recover it. 00:30:33.475 [2024-12-09 05:25:15.651719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.475 [2024-12-09 05:25:15.651759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.475 qpair failed and we were unable to recover it. 00:30:33.475 [2024-12-09 05:25:15.651910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.475 [2024-12-09 05:25:15.651950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.475 qpair failed and we were unable to recover it. 00:30:33.475 [2024-12-09 05:25:15.652153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.475 [2024-12-09 05:25:15.652192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.475 qpair failed and we were unable to recover it. 
00:30:33.475 [2024-12-09 05:25:15.652461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.475 [2024-12-09 05:25:15.652502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.475 qpair failed and we were unable to recover it. 00:30:33.475 [2024-12-09 05:25:15.652704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.475 [2024-12-09 05:25:15.652745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.475 qpair failed and we were unable to recover it. 00:30:33.475 [2024-12-09 05:25:15.653016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.475 [2024-12-09 05:25:15.653055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.475 qpair failed and we were unable to recover it. 00:30:33.475 [2024-12-09 05:25:15.653332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.475 [2024-12-09 05:25:15.653374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.475 qpair failed and we were unable to recover it. 00:30:33.475 [2024-12-09 05:25:15.653632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.475 [2024-12-09 05:25:15.653679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.475 qpair failed and we were unable to recover it. 
00:30:33.475 [2024-12-09 05:25:15.653804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.475 [2024-12-09 05:25:15.653843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.475 qpair failed and we were unable to recover it.
00:30:33.475 [2024-12-09 05:25:15.654149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.475 [2024-12-09 05:25:15.654190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.475 qpair failed and we were unable to recover it.
00:30:33.475 [2024-12-09 05:25:15.654473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.475 [2024-12-09 05:25:15.654516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.475 qpair failed and we were unable to recover it.
00:30:33.475 [2024-12-09 05:25:15.654815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.475 [2024-12-09 05:25:15.654856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.475 qpair failed and we were unable to recover it.
00:30:33.475 [2024-12-09 05:25:15.655004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.475 [2024-12-09 05:25:15.655043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.475 qpair failed and we were unable to recover it.
00:30:33.475 [2024-12-09 05:25:15.655243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.475 [2024-12-09 05:25:15.655284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.475 qpair failed and we were unable to recover it.
00:30:33.475 [2024-12-09 05:25:15.655508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.475 [2024-12-09 05:25:15.655548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.475 qpair failed and we were unable to recover it.
00:30:33.475 [2024-12-09 05:25:15.655775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.475 [2024-12-09 05:25:15.655815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.475 qpair failed and we were unable to recover it.
00:30:33.475 [2024-12-09 05:25:15.655976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.475 [2024-12-09 05:25:15.656016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.475 qpair failed and we were unable to recover it.
00:30:33.475 [2024-12-09 05:25:15.656160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.475 [2024-12-09 05:25:15.656200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.475 qpair failed and we were unable to recover it.
00:30:33.475 [2024-12-09 05:25:15.656511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.475 [2024-12-09 05:25:15.656552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.475 qpair failed and we were unable to recover it.
00:30:33.475 [2024-12-09 05:25:15.656745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.475 [2024-12-09 05:25:15.656785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.475 qpair failed and we were unable to recover it.
00:30:33.475 [2024-12-09 05:25:15.657011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.475 [2024-12-09 05:25:15.657051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.475 qpair failed and we were unable to recover it.
00:30:33.475 [2024-12-09 05:25:15.657247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.475 [2024-12-09 05:25:15.657289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.475 qpair failed and we were unable to recover it.
00:30:33.475 [2024-12-09 05:25:15.657451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.475 [2024-12-09 05:25:15.657492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.475 qpair failed and we were unable to recover it.
00:30:33.475 [2024-12-09 05:25:15.657701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.475 [2024-12-09 05:25:15.657741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.475 qpair failed and we were unable to recover it.
00:30:33.475 [2024-12-09 05:25:15.657949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.475 [2024-12-09 05:25:15.657989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.475 qpair failed and we were unable to recover it.
00:30:33.475 [2024-12-09 05:25:15.658231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.475 [2024-12-09 05:25:15.658281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.475 qpair failed and we were unable to recover it.
00:30:33.475 [2024-12-09 05:25:15.658478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.475 [2024-12-09 05:25:15.658518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.475 qpair failed and we were unable to recover it.
00:30:33.475 [2024-12-09 05:25:15.658677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.475 [2024-12-09 05:25:15.658717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.475 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.658980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.659021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.659241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.659288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.659484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.659525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.659677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.659717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.659911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.659952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.660160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.660200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.660388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.660429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.660575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.660615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.660908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.660949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.661233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.661275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.661538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.661579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.661837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.661878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.662084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.662124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.662340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.662382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.662614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.662655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.662916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.662955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.663221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.663279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.663446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.663487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.663753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.663792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.663928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.663975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.664166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.664227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.664444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.664484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.664771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.664811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.664952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.664992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.665121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.665162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.665388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.665429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.665643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.665683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.665809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.665849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.666065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.666105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.666363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.666404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.666609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.476 [2024-12-09 05:25:15.666649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.476 qpair failed and we were unable to recover it.
00:30:33.476 [2024-12-09 05:25:15.666862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.666902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.667115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.667155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.667394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.667437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.667601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.667642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.667896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.667936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.668142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.668183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.668384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.668425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.668628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.668667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.668883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.668923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.669129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.669170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.669330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.669371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.669587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.669628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.669829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.669870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.670098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.670138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.670344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.670386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.670601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.670643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.670853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.670893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.671085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.671125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.671365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.671408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.671692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.671733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.671945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.671986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.672268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.672311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.672527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.672567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.672781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.672821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.673058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.673099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.673248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.673290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.673489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.673529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.673738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.673779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.673922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.673962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.674180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.674230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.674450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.674490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.674679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.674720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.674949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.674989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.675122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.675162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.675392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.675434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.675570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.675611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.675813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.675852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.477 [2024-12-09 05:25:15.676069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.477 [2024-12-09 05:25:15.676109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.477 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.676375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.676417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.676610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.676652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.676929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.676970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.677195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.677246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.677512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.677552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.677758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.677798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.678038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.678079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.678287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.678327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.678523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.678563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.678760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.678800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.679086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.679126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.679329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.679373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.679532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.679573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.679860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.679901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.680097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.680137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.680417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.680458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.680721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.680761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.680972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.681019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.681315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.681357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.681585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.681625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.681906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.681945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.682226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.682268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.682475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.682516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.682745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.682785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.682930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.682971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.683111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.478 [2024-12-09 05:25:15.683151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.478 qpair failed and we were unable to recover it.
00:30:33.478 [2024-12-09 05:25:15.683370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.478 [2024-12-09 05:25:15.683412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.478 qpair failed and we were unable to recover it. 00:30:33.478 [2024-12-09 05:25:15.683627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.478 [2024-12-09 05:25:15.683668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.478 qpair failed and we were unable to recover it. 00:30:33.478 [2024-12-09 05:25:15.683794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.478 [2024-12-09 05:25:15.683834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.478 qpair failed and we were unable to recover it. 00:30:33.478 [2024-12-09 05:25:15.684044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.478 [2024-12-09 05:25:15.684084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.478 qpair failed and we were unable to recover it. 00:30:33.478 [2024-12-09 05:25:15.684365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.478 [2024-12-09 05:25:15.684406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.478 qpair failed and we were unable to recover it. 
00:30:33.478 [2024-12-09 05:25:15.684608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.478 [2024-12-09 05:25:15.684650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.478 qpair failed and we were unable to recover it. 00:30:33.478 [2024-12-09 05:25:15.684862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.478 [2024-12-09 05:25:15.684901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.478 qpair failed and we were unable to recover it. 00:30:33.478 [2024-12-09 05:25:15.685053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.478 [2024-12-09 05:25:15.685094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.478 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.685287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.685329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.685593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.685633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 
00:30:33.479 [2024-12-09 05:25:15.685849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.685890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.686158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.686197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.686446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.686486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.686702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.686743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.687023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.687062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 
00:30:33.479 [2024-12-09 05:25:15.687271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.687314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.687580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.687620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.687831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.687871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.688105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.688147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.688301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.688342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 
00:30:33.479 [2024-12-09 05:25:15.688575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.688615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.688807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.688847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.689041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.689081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.689268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.689310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.689595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.689635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 
00:30:33.479 [2024-12-09 05:25:15.689784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.689825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.689966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.690006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.690266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.690308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.690456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.690495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.690760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.690800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 
00:30:33.479 [2024-12-09 05:25:15.690994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.691035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.691231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.691285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.691418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.691458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.691714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.691754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.692012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.692052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 
00:30:33.479 [2024-12-09 05:25:15.692255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.692297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.692581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.692621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.692831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.692872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.693070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.693111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.693388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.693429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 
00:30:33.479 [2024-12-09 05:25:15.693646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.693686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.693989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.694031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.694236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.694276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.694433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.694473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 00:30:33.479 [2024-12-09 05:25:15.694689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.694729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.479 qpair failed and we were unable to recover it. 
00:30:33.479 [2024-12-09 05:25:15.694928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.479 [2024-12-09 05:25:15.694968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.695174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.695226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.695519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.695559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.695834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.695875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.696110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.696150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 
00:30:33.480 [2024-12-09 05:25:15.696368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.696409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.696691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.696731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.696935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.696975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.697122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.697161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.697455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.697497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 
00:30:33.480 [2024-12-09 05:25:15.697757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.697797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.697939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.697978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.698190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.698241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.698390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.698431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.698715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.698755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 
00:30:33.480 [2024-12-09 05:25:15.698908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.698948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.699235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.699283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.699431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.699471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.699668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.699709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.699918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.699958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 
00:30:33.480 [2024-12-09 05:25:15.700149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.700189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.700490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.700531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.700804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.700844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.701072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.701113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.701371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.701412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 
00:30:33.480 [2024-12-09 05:25:15.701609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.701649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.701849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.701895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.702089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.702129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.702258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.702300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.702582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.702621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 
00:30:33.480 [2024-12-09 05:25:15.702899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.702939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.703133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.703173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.703484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.703524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.703800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.703840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.704164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.704204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 
00:30:33.480 [2024-12-09 05:25:15.704511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.704552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.704861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.704901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.480 [2024-12-09 05:25:15.705173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.480 [2024-12-09 05:25:15.705234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.480 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.705494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.705534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.705817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.705857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 
00:30:33.481 [2024-12-09 05:25:15.706121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.706163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.706434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.706475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.706733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.706773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.707074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.707114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.707373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.707415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 
00:30:33.481 [2024-12-09 05:25:15.707611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.707651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.707937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.707978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.708240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.708287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.708570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.708610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.708825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.708865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 
00:30:33.481 [2024-12-09 05:25:15.709146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.709186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.709419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.709460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.709615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.709655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.709895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.709936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.710240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.710283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 
00:30:33.481 [2024-12-09 05:25:15.710487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.710527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.710742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.710782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.711014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.711054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.711340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.711381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.711640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.711680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 
00:30:33.481 [2024-12-09 05:25:15.711897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.711938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.712238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.712285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.712445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.712485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.712618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.712658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.712916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.712955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 
00:30:33.481 [2024-12-09 05:25:15.713099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.713139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.713456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.713504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.713699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.713738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.713995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.714035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.714298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.714339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 
00:30:33.481 [2024-12-09 05:25:15.714532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.714572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.714775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.714815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.715026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.715067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.481 [2024-12-09 05:25:15.715258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.481 [2024-12-09 05:25:15.715299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.481 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.715581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.715622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 
00:30:33.482 [2024-12-09 05:25:15.715814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.715855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.716015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.716055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.716272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.716332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.716616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.716658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.716891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.716931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 
00:30:33.482 [2024-12-09 05:25:15.717197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.717244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.717472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.717511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.717748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.717788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.717993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.718033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.718292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.718333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 
00:30:33.482 [2024-12-09 05:25:15.718489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.718529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.718815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.718855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.719116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.719161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.719327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.719369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.719598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.719639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 
00:30:33.482 [2024-12-09 05:25:15.719898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.719938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.720149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.720191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.720355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.720398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.720718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.720760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.721057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.721097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 
00:30:33.482 [2024-12-09 05:25:15.721296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.721338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.721529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.721569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.721763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.721802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.722084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.722124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.722340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.722381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 
00:30:33.482 [2024-12-09 05:25:15.722579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.722619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.722829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.722870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.723166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.723216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.723376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.723415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.723709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.723749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 
00:30:33.482 [2024-12-09 05:25:15.723903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.723944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.724200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.724270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.724475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.724515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.724709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.724749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.724954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.724995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 
00:30:33.482 [2024-12-09 05:25:15.725288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.725333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.725613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.725655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.725939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.725979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.726191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.726241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 00:30:33.482 [2024-12-09 05:25:15.726440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.482 [2024-12-09 05:25:15.726487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.482 qpair failed and we were unable to recover it. 
00:30:33.482 [2024-12-09 05:25:15.726624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.483 [2024-12-09 05:25:15.726664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.483 qpair failed and we were unable to recover it. 00:30:33.483 [2024-12-09 05:25:15.726798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.483 [2024-12-09 05:25:15.726838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.483 qpair failed and we were unable to recover it. 00:30:33.483 [2024-12-09 05:25:15.727106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.483 [2024-12-09 05:25:15.727149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.483 qpair failed and we were unable to recover it. 00:30:33.483 [2024-12-09 05:25:15.727376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.483 [2024-12-09 05:25:15.727419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.483 qpair failed and we were unable to recover it. 00:30:33.483 [2024-12-09 05:25:15.727614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.483 [2024-12-09 05:25:15.727657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.483 qpair failed and we were unable to recover it. 
00:30:33.483 [2024-12-09 05:25:15.727934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.483 [2024-12-09 05:25:15.727975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.483 qpair failed and we were unable to recover it. 00:30:33.483 [2024-12-09 05:25:15.728239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.483 [2024-12-09 05:25:15.728285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.483 qpair failed and we were unable to recover it. 00:30:33.483 [2024-12-09 05:25:15.728566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.483 [2024-12-09 05:25:15.728606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.483 qpair failed and we were unable to recover it. 00:30:33.483 [2024-12-09 05:25:15.728745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.483 [2024-12-09 05:25:15.728785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.483 qpair failed and we were unable to recover it. 00:30:33.483 [2024-12-09 05:25:15.729064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.483 [2024-12-09 05:25:15.729103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.483 qpair failed and we were unable to recover it. 
00:30:33.483 [2024-12-09 05:25:15.729256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.483 [2024-12-09 05:25:15.729298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.483 qpair failed and we were unable to recover it. 00:30:33.483 [2024-12-09 05:25:15.729498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.483 [2024-12-09 05:25:15.729539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.483 qpair failed and we were unable to recover it. 00:30:33.483 [2024-12-09 05:25:15.729825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.483 [2024-12-09 05:25:15.729865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.483 qpair failed and we were unable to recover it. 00:30:33.483 [2024-12-09 05:25:15.730071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.483 [2024-12-09 05:25:15.730111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.483 qpair failed and we were unable to recover it. 00:30:33.483 [2024-12-09 05:25:15.730389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.483 [2024-12-09 05:25:15.730431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.483 qpair failed and we were unable to recover it. 
00:30:33.483 [2024-12-09 05:25:15.730593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.483 [2024-12-09 05:25:15.730633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.483 qpair failed and we were unable to recover it. 00:30:33.483 [2024-12-09 05:25:15.730830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.483 [2024-12-09 05:25:15.730870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.483 qpair failed and we were unable to recover it. 00:30:33.483 [2024-12-09 05:25:15.731082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.483 [2024-12-09 05:25:15.731123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.483 qpair failed and we were unable to recover it. 00:30:33.483 [2024-12-09 05:25:15.731397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.483 [2024-12-09 05:25:15.731438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.483 qpair failed and we were unable to recover it. 00:30:33.483 [2024-12-09 05:25:15.731721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.483 [2024-12-09 05:25:15.731761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.483 qpair failed and we were unable to recover it. 
00:30:33.483 [2024-12-09 05:25:15.732063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.483 [2024-12-09 05:25:15.732115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.483 qpair failed and we were unable to recover it.
00:30:33.483 [2024-12-09 05:25:15.732343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.483 [2024-12-09 05:25:15.732386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.483 qpair failed and we were unable to recover it.
00:30:33.483 [2024-12-09 05:25:15.732581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.483 [2024-12-09 05:25:15.732622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.483 qpair failed and we were unable to recover it.
00:30:33.483 [2024-12-09 05:25:15.732844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.483 [2024-12-09 05:25:15.732885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.483 qpair failed and we were unable to recover it.
00:30:33.483 [2024-12-09 05:25:15.733092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.483 [2024-12-09 05:25:15.733131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.483 qpair failed and we were unable to recover it.
00:30:33.483 [2024-12-09 05:25:15.733355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.483 [2024-12-09 05:25:15.733400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.483 qpair failed and we were unable to recover it.
00:30:33.483 [2024-12-09 05:25:15.733611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.483 [2024-12-09 05:25:15.733652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.483 qpair failed and we were unable to recover it.
00:30:33.483 [2024-12-09 05:25:15.733862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.483 [2024-12-09 05:25:15.733902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.483 qpair failed and we were unable to recover it.
00:30:33.483 [2024-12-09 05:25:15.734164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.483 [2024-12-09 05:25:15.734204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.483 qpair failed and we were unable to recover it.
00:30:33.483 [2024-12-09 05:25:15.734430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.483 [2024-12-09 05:25:15.734473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.483 qpair failed and we were unable to recover it.
00:30:33.483 [2024-12-09 05:25:15.734640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.483 [2024-12-09 05:25:15.734680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.483 qpair failed and we were unable to recover it.
00:30:33.483 [2024-12-09 05:25:15.734874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.483 [2024-12-09 05:25:15.734920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.483 qpair failed and we were unable to recover it.
00:30:33.483 [2024-12-09 05:25:15.735124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.483 [2024-12-09 05:25:15.735164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.483 qpair failed and we were unable to recover it.
00:30:33.483 [2024-12-09 05:25:15.735417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.483 [2024-12-09 05:25:15.735500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.483 qpair failed and we were unable to recover it.
00:30:33.483 [2024-12-09 05:25:15.735779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.483 [2024-12-09 05:25:15.735839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.483 qpair failed and we were unable to recover it.
00:30:33.483 [2024-12-09 05:25:15.736084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.483 [2024-12-09 05:25:15.736134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.483 qpair failed and we were unable to recover it.
00:30:33.483 [2024-12-09 05:25:15.736462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.483 [2024-12-09 05:25:15.736517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.483 qpair failed and we were unable to recover it.
00:30:33.483 [2024-12-09 05:25:15.736798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.483 [2024-12-09 05:25:15.736850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.483 qpair failed and we were unable to recover it.
00:30:33.483 [2024-12-09 05:25:15.737026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.483 [2024-12-09 05:25:15.737074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.483 qpair failed and we were unable to recover it.
00:30:33.483 [2024-12-09 05:25:15.737244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.483 [2024-12-09 05:25:15.737299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.483 qpair failed and we were unable to recover it.
00:30:33.483 [2024-12-09 05:25:15.737598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.483 [2024-12-09 05:25:15.737650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.483 qpair failed and we were unable to recover it.
00:30:33.483 [2024-12-09 05:25:15.737959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.738008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.738159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.738223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.738519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.738576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.738788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.738842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.739095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.739156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.739382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.739432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.739664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.739722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.739978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.740028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.740312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.740368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.740600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.740649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.740821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.740876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.741163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.741231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.741480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.741531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.741828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.741888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.742112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.742161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.742457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.742511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.742764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.742827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.743066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.743124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.743372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.743431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.743651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.743701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.743918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.743978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.744235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.744288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.744591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.744643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.744917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.744972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.745144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.745194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.745426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.745480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.745788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.745838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.746025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.746076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.746272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.746328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.746490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.746545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.746876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.746943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.747118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.747168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.747401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.747457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.747754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.747805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.747963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.748018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.748171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.748233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.748472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.748528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.748757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.748818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.749097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.749149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.749388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.749442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.484 qpair failed and we were unable to recover it.
00:30:33.484 [2024-12-09 05:25:15.749625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.484 [2024-12-09 05:25:15.749677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.485 [2024-12-09 05:25:15.749897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.749956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.750153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.750224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.750465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.750516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.750715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.750774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.751063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.751121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.751377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.751433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.751649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.751710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.751960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.752016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.752252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.752303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.752548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.752600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.752814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.752871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.753031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.753084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.753321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.753374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.753553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.753613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.753822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.753871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.754147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.754197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.754422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.754504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.754795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.754853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.755026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.755072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.755335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.755383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.755562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.755611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.755761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.755814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.756017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.756060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.756295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.756344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.756518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.756560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.756821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.756862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.757073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.757130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.757386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.757429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.757699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.757739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.757899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.757940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.758228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.758275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.758434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.758474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.758679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.485 [2024-12-09 05:25:15.758719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.485 qpair failed and we were unable to recover it.
00:30:33.485 [2024-12-09 05:25:15.759055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.485 [2024-12-09 05:25:15.759096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.485 qpair failed and we were unable to recover it. 00:30:33.485 [2024-12-09 05:25:15.759382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.485 [2024-12-09 05:25:15.759424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.485 qpair failed and we were unable to recover it. 00:30:33.485 [2024-12-09 05:25:15.759586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.759628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.759819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.759867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.760064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.760104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 
00:30:33.486 [2024-12-09 05:25:15.760309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.760352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.760585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.760630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.760783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.760824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.761023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.761065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.761227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.761268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 
00:30:33.486 [2024-12-09 05:25:15.761468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.761509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.761707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.761748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.761963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.762003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.762263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.762305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.762587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.762628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 
00:30:33.486 [2024-12-09 05:25:15.762786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.762826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.763021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.763062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.763196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.763246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.763386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.763427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.763641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.763681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 
00:30:33.486 [2024-12-09 05:25:15.763886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.763926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.764148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.764192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.764367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.764410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.764695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.764736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.765007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.765049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 
00:30:33.486 [2024-12-09 05:25:15.765281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.765324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.765470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.765511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.765781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.765822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.766134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.766175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.766433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.766474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 
00:30:33.486 [2024-12-09 05:25:15.766678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.766718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.766917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.766965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.767261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.767311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.767602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.767653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.767894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.767943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 
00:30:33.486 [2024-12-09 05:25:15.768155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.768204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.768442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.768491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.768661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.768717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.768870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.768918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.769140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.769188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 
00:30:33.486 [2024-12-09 05:25:15.769429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.769472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.486 [2024-12-09 05:25:15.769630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.486 [2024-12-09 05:25:15.769679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.486 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.769906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.769955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.770179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.770244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.770539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.770582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 
00:30:33.487 [2024-12-09 05:25:15.770866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.770914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.771127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.771175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.771398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.771441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.771644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.771687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.771839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.771878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 
00:30:33.487 [2024-12-09 05:25:15.772138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.772179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.772459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.772500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.772708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.772749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.773035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.773076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.773300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.773342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 
00:30:33.487 [2024-12-09 05:25:15.773552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.773591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.773802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.773843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.773971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.774011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.774268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.774309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.774524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.774565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 
00:30:33.487 [2024-12-09 05:25:15.774758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.774798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.775079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.775120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.775252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.775293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.775490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.775530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.775793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.775839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 
00:30:33.487 [2024-12-09 05:25:15.776131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.776172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.776429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.776470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.776626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.776667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.776865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.776904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.777099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.777140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 
00:30:33.487 [2024-12-09 05:25:15.777295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.777336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.777623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.777664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.777857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.777897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.778108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.778147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.778300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.778342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 
00:30:33.487 [2024-12-09 05:25:15.778485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.778526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.778813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.778852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.779110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.779150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.779428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.779471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.779668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.779708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 
00:30:33.487 [2024-12-09 05:25:15.779949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.779990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.780244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.780292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.780517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.780556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.780776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.780816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 00:30:33.487 [2024-12-09 05:25:15.781042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.487 [2024-12-09 05:25:15.781081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.487 qpair failed and we were unable to recover it. 
00:30:33.487 [2024-12-09 05:25:15.781338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.781380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.781527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.781567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.781778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.781818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.782016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.782057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.782263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.782304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 
00:30:33.488 [2024-12-09 05:25:15.782562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.782602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.782794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.782834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.783036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.783077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.783288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.783329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.783522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.783561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 
00:30:33.488 [2024-12-09 05:25:15.783756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.783796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.784010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.784050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.784247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.784288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.784497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.784537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.784729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.784768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 
00:30:33.488 [2024-12-09 05:25:15.784895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.784934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.785066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.785105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.785320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.785362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.785582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.785622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.785901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.785941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 
00:30:33.488 [2024-12-09 05:25:15.786339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.786416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.786672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.786718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.786933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.786973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.787170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.787225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.787537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.787578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 
00:30:33.488 [2024-12-09 05:25:15.787804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.787845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.788106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.788146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.788432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.788474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.788629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.788669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.788955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.788995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 
00:30:33.488 [2024-12-09 05:25:15.789197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.789248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.789459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.789500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.789700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.789740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.790029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.790078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.790227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.790269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 
00:30:33.488 [2024-12-09 05:25:15.790497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.790537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.790672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.790712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.790996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.791036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.791175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.791228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 00:30:33.488 [2024-12-09 05:25:15.791378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.791418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.488 qpair failed and we were unable to recover it. 
00:30:33.488 [2024-12-09 05:25:15.791677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.488 [2024-12-09 05:25:15.791717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.791999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.792039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.792294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.792336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.792465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.792505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.792724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.792765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 
00:30:33.489 [2024-12-09 05:25:15.792958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.792999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.793287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.793328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.793619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.793659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.793937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.793978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.794102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.794142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 
00:30:33.489 [2024-12-09 05:25:15.794285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.794327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.794636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.794678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.794951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.794992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.795253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.795295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.795452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.795493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 
00:30:33.489 [2024-12-09 05:25:15.795769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.795810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.796033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.796073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.796219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.796260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.796471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.796512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.796718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.796758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 
00:30:33.489 [2024-12-09 05:25:15.796907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.796951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.797167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.797219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.797435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.797476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.797624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.797663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.797857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.797898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 
00:30:33.489 [2024-12-09 05:25:15.798106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.798146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.798302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.798343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.798557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.798596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.798831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.798872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.489 qpair failed and we were unable to recover it. 00:30:33.489 [2024-12-09 05:25:15.799078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.489 [2024-12-09 05:25:15.799117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 
00:30:33.490 [2024-12-09 05:25:15.799310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.799352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.799499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.799539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.799799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.799839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.799980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.800019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.800287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.800328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 
00:30:33.490 [2024-12-09 05:25:15.800524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.800564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.800722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.800761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.800987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.801027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.801246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.801288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.801494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.801534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 
00:30:33.490 [2024-12-09 05:25:15.801745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.801785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.801994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.802034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.802259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.802300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.802499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.802539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.802834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.802874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 
00:30:33.490 [2024-12-09 05:25:15.803065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.803105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.803365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.803405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.803623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.803668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.803882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.803921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.804184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.804238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 
00:30:33.490 [2024-12-09 05:25:15.804464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.804504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.804699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.804739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.804935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.804975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.805174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.805225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.805484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.805525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 
00:30:33.490 [2024-12-09 05:25:15.805732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.805772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.805987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.806027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.806306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.806347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.806545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.806585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.806784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.806824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 
00:30:33.490 [2024-12-09 05:25:15.807020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.807059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.807276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.807319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.807541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.807581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.807838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.807878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 00:30:33.490 [2024-12-09 05:25:15.808095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.490 [2024-12-09 05:25:15.808135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.490 qpair failed and we were unable to recover it. 
00:30:33.490 [2024-12-09 05:25:15.808402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.490 [2024-12-09 05:25:15.808442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.490 qpair failed and we were unable to recover it.
00:30:33.490 [2024-12-09 05:25:15.808675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.490 [2024-12-09 05:25:15.808715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.490 qpair failed and we were unable to recover it.
00:30:33.490 [2024-12-09 05:25:15.808927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.490 [2024-12-09 05:25:15.808967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.490 qpair failed and we were unable to recover it.
00:30:33.490 [2024-12-09 05:25:15.809229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.490 [2024-12-09 05:25:15.809270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.490 qpair failed and we were unable to recover it.
00:30:33.490 [2024-12-09 05:25:15.809538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.490 [2024-12-09 05:25:15.809578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.490 qpair failed and we were unable to recover it.
00:30:33.490 [2024-12-09 05:25:15.809785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.490 [2024-12-09 05:25:15.809825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.490 qpair failed and we were unable to recover it.
00:30:33.490 [2024-12-09 05:25:15.810019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.490 [2024-12-09 05:25:15.810058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.490 qpair failed and we were unable to recover it.
00:30:33.490 [2024-12-09 05:25:15.810363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.490 [2024-12-09 05:25:15.810405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.490 qpair failed and we were unable to recover it.
00:30:33.490 [2024-12-09 05:25:15.810632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.810673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.810889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.810935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.811129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.811168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.811371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.811413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.811618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.811658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.811807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.811847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.812070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.812110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.812341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.812381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.812574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.812614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.812752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.812792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.812939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.812978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.813244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.813285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.813583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.813623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.813761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.813802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.813953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.813993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.814205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.814260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.814426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.814466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.814728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.814768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.814963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.815004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.815264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.815305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.815509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.815549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.815782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.815822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.816050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.816089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.816281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.816322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.816519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.816559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.816806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.816846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.816998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.817039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.817239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.817280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.817427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.817473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.817720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.817761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.818004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.818044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.818237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.818277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.818485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.818526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.818805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.818844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.819059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.819098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.819245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.819287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.819413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.819453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.819667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.819706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.819841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.819881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.820084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.820125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.820417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.820458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.820679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.820719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.820922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.820963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.821104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.821143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.821301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.821343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.821538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.821578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.821703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.821743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.491 [2024-12-09 05:25:15.821974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.491 [2024-12-09 05:25:15.822015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.491 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.822311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.822353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.822562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.822601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.822746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.822785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.823093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.823133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.823373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.823415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.823555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.823594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.823793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.823834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.824104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.824144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.824375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.824417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.824564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.824604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.824804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.824844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.825134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.825173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.825384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.825425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.825629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.825670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.825792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.825831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.826090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.826130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.826402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.826444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.826645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.826685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.826835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.826875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.827001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.827042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.827259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.827300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.827512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.827558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.827756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.827796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.827950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.827989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.828222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.828263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.828548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.828588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.828870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.828910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.829053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.829092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.829252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.829293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.829529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.829569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.829764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.829804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.830010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.830050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.830242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.830283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.830492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.830533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.830737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.830777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.831007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.831047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.831243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.831285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.831427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.831466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.831727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.492 [2024-12-09 05:25:15.831768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.492 qpair failed and we were unable to recover it.
00:30:33.492 [2024-12-09 05:25:15.832026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.492 [2024-12-09 05:25:15.832066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.492 qpair failed and we were unable to recover it. 00:30:33.492 [2024-12-09 05:25:15.832278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.492 [2024-12-09 05:25:15.832329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.492 qpair failed and we were unable to recover it. 00:30:33.492 [2024-12-09 05:25:15.832558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.492 [2024-12-09 05:25:15.832599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.492 qpair failed and we were unable to recover it. 00:30:33.492 [2024-12-09 05:25:15.832749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.492 [2024-12-09 05:25:15.832789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.492 qpair failed and we were unable to recover it. 00:30:33.492 [2024-12-09 05:25:15.833001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.492 [2024-12-09 05:25:15.833041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.492 qpair failed and we were unable to recover it. 
00:30:33.492 [2024-12-09 05:25:15.833255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.492 [2024-12-09 05:25:15.833296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.492 qpair failed and we were unable to recover it. 00:30:33.492 [2024-12-09 05:25:15.833502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.492 [2024-12-09 05:25:15.833541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.492 qpair failed and we were unable to recover it. 00:30:33.492 [2024-12-09 05:25:15.833666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.492 [2024-12-09 05:25:15.833707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.833909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.833950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.834154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.834200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 
00:30:33.493 [2024-12-09 05:25:15.834495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.834536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.834672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.834712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.834838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.834878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.835080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.835120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.835345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.835387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 
00:30:33.493 [2024-12-09 05:25:15.835684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.835724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.835985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.836025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.836166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.836217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.836417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.836457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.836666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.836705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 
00:30:33.493 [2024-12-09 05:25:15.836910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.836950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.837235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.837277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.837496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.837537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.837702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.837742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.837962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.838002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 
00:30:33.493 [2024-12-09 05:25:15.838286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.838328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.838535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.838574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.838712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.838752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.838947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.838988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.839232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.839274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 
00:30:33.493 [2024-12-09 05:25:15.839533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.839573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.839783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.839824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.839959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.839998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.840231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.840272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.840511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.840552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 
00:30:33.493 [2024-12-09 05:25:15.840875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.840915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.841175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.841234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.841441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.841481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.841749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.841789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.841981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.842021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 
00:30:33.493 [2024-12-09 05:25:15.842305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.842346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.842492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.842532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.842687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.842727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.842876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.842916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.843146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.843186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 
00:30:33.493 [2024-12-09 05:25:15.843428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.843469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.843618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.843657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.843801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.843841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.844117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.844157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 00:30:33.493 [2024-12-09 05:25:15.844455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.493 [2024-12-09 05:25:15.844496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.493 qpair failed and we were unable to recover it. 
00:30:33.494 [2024-12-09 05:25:15.844749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.844790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.844930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.844970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.845183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.845235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.845448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.845488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.845690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.845731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 
00:30:33.494 [2024-12-09 05:25:15.845938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.845978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.846286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.846327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.846608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.846648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.846839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.846878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.847019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.847061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 
00:30:33.494 [2024-12-09 05:25:15.847363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.847405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.847640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.847680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.847838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.847877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.848116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.848156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.848304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.848346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 
00:30:33.494 [2024-12-09 05:25:15.848550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.848589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.848740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.848780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.848932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.848973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.849164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.849203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.849477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.849517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 
00:30:33.494 [2024-12-09 05:25:15.849724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.849764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.850046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.850086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.850301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.850343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.850593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.850634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.850827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.850867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 
00:30:33.494 [2024-12-09 05:25:15.851148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.851188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.851452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.851492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.851693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.851735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.851863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.851903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.852118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.852158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 
00:30:33.494 [2024-12-09 05:25:15.852376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.852417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.852698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.852738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.852971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.853011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.853221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.853262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.853467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.853506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 
00:30:33.494 [2024-12-09 05:25:15.853724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.853763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.854023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.854063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.854404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.854446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.854589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.854628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 00:30:33.494 [2024-12-09 05:25:15.854823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.494 [2024-12-09 05:25:15.854863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.494 qpair failed and we were unable to recover it. 
00:30:33.497 [2024-12-09 05:25:15.882217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.882259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.882543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.882583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.882825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.882866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.883079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.883118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.883380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.883421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 
00:30:33.497 [2024-12-09 05:25:15.883631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.883671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.883872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.883912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.884054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.884093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.884306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.884348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.884550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.884590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 
00:30:33.497 [2024-12-09 05:25:15.884803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.884843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.885032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.885071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.885333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.885375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.885574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.885615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.885765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.885805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 
00:30:33.497 [2024-12-09 05:25:15.885998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.886038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.886317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.886358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.886554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.886593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.886751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.886791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.886948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.886988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 
00:30:33.497 [2024-12-09 05:25:15.887180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.887227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.887384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.887423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.887618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.887658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.887850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.887889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.888014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.888053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 
00:30:33.497 [2024-12-09 05:25:15.888250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.888291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.888421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.888465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.888676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.497 [2024-12-09 05:25:15.888716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.497 qpair failed and we were unable to recover it. 00:30:33.497 [2024-12-09 05:25:15.888858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.888899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.889104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.889143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 
00:30:33.498 [2024-12-09 05:25:15.889305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.889346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.889494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.889534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.889828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.889867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.890077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.890116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.890372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.890414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 
00:30:33.498 [2024-12-09 05:25:15.890572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.890612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.890755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.890795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.890944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.890985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.891176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.891225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.891449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.891489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 
00:30:33.498 [2024-12-09 05:25:15.891782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.891823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.892087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.892128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.892308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.892349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.892497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.892537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.892756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.892795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 
00:30:33.498 [2024-12-09 05:25:15.893053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.893093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.893314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.893355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.893614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.893653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.893945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.893985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.894219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.894260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 
00:30:33.498 [2024-12-09 05:25:15.894556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.894596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.894806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.894846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.895110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.895151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.895368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.895415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.895709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.895750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 
00:30:33.498 [2024-12-09 05:25:15.896027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.896067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.896351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.896392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.896528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.896567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.896761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.896801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.896995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.897034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 
00:30:33.498 [2024-12-09 05:25:15.897269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.897311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.897511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.897550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.897714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.897754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.897952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.897991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.898133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.898172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 
00:30:33.498 [2024-12-09 05:25:15.898400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.898440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.898662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.898701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.898902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.898942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.899202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.899250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.899536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.899576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 
00:30:33.498 [2024-12-09 05:25:15.899725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.899766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.899904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.899943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.498 [2024-12-09 05:25:15.900092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.498 [2024-12-09 05:25:15.900131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.498 qpair failed and we were unable to recover it. 00:30:33.499 [2024-12-09 05:25:15.900335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.499 [2024-12-09 05:25:15.900377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.499 qpair failed and we were unable to recover it. 00:30:33.499 [2024-12-09 05:25:15.900643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.499 [2024-12-09 05:25:15.900682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.499 qpair failed and we were unable to recover it. 
00:30:33.499 [2024-12-09 05:25:15.900898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.499 [2024-12-09 05:25:15.900937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.499 qpair failed and we were unable to recover it. 
[preceding error repeated for each retried connection attempt from 05:25:15.901 through 05:25:15.928; duplicate log lines trimmed]
00:30:33.774 [2024-12-09 05:25:15.928448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.928488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.928679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.928719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.928977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.929018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.929175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.929223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.929368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.929407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 
00:30:33.774 [2024-12-09 05:25:15.929541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.929581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.929777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.929817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.930077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.930117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.930334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.930375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.930581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.930621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 
00:30:33.774 [2024-12-09 05:25:15.930813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.930853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.931082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.931122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.931252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.931294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.931441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.931481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.931620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.931659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 
00:30:33.774 [2024-12-09 05:25:15.931804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.931844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.932058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.932098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.932307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.932349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.932606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.932646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.932839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.932880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 
00:30:33.774 [2024-12-09 05:25:15.933016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.933055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.933179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.933230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.933381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.933421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.933656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.933696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.933892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.933932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 
00:30:33.774 [2024-12-09 05:25:15.934171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.934219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.934456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.934501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.934625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.934665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.934822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.934862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.935125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.935165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 
00:30:33.774 [2024-12-09 05:25:15.935326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.935367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.935493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.935533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.935682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.935722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.935961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.936002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.936259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.936301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 
00:30:33.774 [2024-12-09 05:25:15.936506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.936546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.936698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.774 [2024-12-09 05:25:15.936738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.774 qpair failed and we were unable to recover it. 00:30:33.774 [2024-12-09 05:25:15.936874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.936914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.937110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.937149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.937442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.937483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 
00:30:33.775 [2024-12-09 05:25:15.937701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.937741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.937879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.937919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.938061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.938101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.938360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.938402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.938551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.938590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 
00:30:33.775 [2024-12-09 05:25:15.938721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.938761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.938884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.938924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.939204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.939252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.939391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.939431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.939581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.939622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 
00:30:33.775 [2024-12-09 05:25:15.939924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.939964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.940171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.940238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.940387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.940427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.940627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.940673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.940899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.940939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 
00:30:33.775 [2024-12-09 05:25:15.941133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.941178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.941399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.941439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.941609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.941648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.941847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.941888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.942121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.942161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 
00:30:33.775 [2024-12-09 05:25:15.942373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.942415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.942613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.942654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.942785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.942824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.942976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.943016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.943170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.943234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 
00:30:33.775 [2024-12-09 05:25:15.943427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.943467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.943747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.943787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.943942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.943982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.944124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.944165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.944391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.944432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 
00:30:33.775 [2024-12-09 05:25:15.944710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.944751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.944941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.944981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.945140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.945179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.945334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.945375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 00:30:33.775 [2024-12-09 05:25:15.945581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.775 [2024-12-09 05:25:15.945621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.775 qpair failed and we were unable to recover it. 
00:30:33.776 [2024-12-09 05:25:15.945918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.776 [2024-12-09 05:25:15.945957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.776 qpair failed and we were unable to recover it. 00:30:33.776 [2024-12-09 05:25:15.946192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.776 [2024-12-09 05:25:15.946245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.776 qpair failed and we were unable to recover it. 00:30:33.776 [2024-12-09 05:25:15.946534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.776 [2024-12-09 05:25:15.946575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.776 qpair failed and we were unable to recover it. 00:30:33.776 [2024-12-09 05:25:15.946779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.776 [2024-12-09 05:25:15.946819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.776 qpair failed and we were unable to recover it. 00:30:33.776 [2024-12-09 05:25:15.947106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.776 [2024-12-09 05:25:15.947146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.776 qpair failed and we were unable to recover it. 
00:30:33.779 [... identical "connect() failed, errno = 111" / "qpair failed and we were unable to recover it" messages repeat through 2024-12-09 05:25:15.974434 ...]
00:30:33.779 [2024-12-09 05:25:15.974645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.974685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.974886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.974926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.975085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.975126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.975274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.975315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.975577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.975618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 
00:30:33.779 [2024-12-09 05:25:15.975812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.975852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.976131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.976170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.976407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.976448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.976587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.976626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.976838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.976879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 
00:30:33.779 [2024-12-09 05:25:15.977081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.977122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.977271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.977312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.977598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.977638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.977856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.977896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.978042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.978081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 
00:30:33.779 [2024-12-09 05:25:15.978295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.978336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.978532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.978572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.978774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.978813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.979091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.979131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.979361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.979403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 
00:30:33.779 [2024-12-09 05:25:15.979556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.979596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.979741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.979781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.979972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.980013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.980221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.980262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.980469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.980509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 
00:30:33.779 [2024-12-09 05:25:15.980649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.980689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.980873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.980954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.981192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.981267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.981541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.981590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.981826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.981877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 
00:30:33.779 [2024-12-09 05:25:15.982029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.982080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.982448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.779 [2024-12-09 05:25:15.982499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.779 qpair failed and we were unable to recover it. 00:30:33.779 [2024-12-09 05:25:15.982771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.982819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.983032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.983086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.983331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.983386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 
00:30:33.780 [2024-12-09 05:25:15.983659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.983708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.984000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.984050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.984355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.984408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.984578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.984627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.984792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.984851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 
00:30:33.780 [2024-12-09 05:25:15.985081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.985128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.985435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.985486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.985636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.985680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.985885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.985924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.986063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.986103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 
00:30:33.780 [2024-12-09 05:25:15.986303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.986344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.986556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.986596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.986823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.986865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.987078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.987117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.987252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.987294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 
00:30:33.780 [2024-12-09 05:25:15.987490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.987530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.987808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.987848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.988055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.988096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.988332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.988374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.988583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.988622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 
00:30:33.780 [2024-12-09 05:25:15.988779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.988819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.988965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.989005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.989148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.989188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.989418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.989458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.989594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.989634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 
00:30:33.780 [2024-12-09 05:25:15.989861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.989901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.990159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.990199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.990424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.990466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.990670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.990710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.990834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.990875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 
00:30:33.780 [2024-12-09 05:25:15.991069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.991109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.991319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.991367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.991519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.991561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.991698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.991737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 00:30:33.780 [2024-12-09 05:25:15.991930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.780 [2024-12-09 05:25:15.991969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.780 qpair failed and we were unable to recover it. 
00:30:33.781 [2024-12-09 05:25:15.992186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.992252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.992483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.992522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.992715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.992756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.992910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.992950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.993088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.993127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 
00:30:33.781 [2024-12-09 05:25:15.993397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.993439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.993633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.993673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.993801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.993841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.994106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.994146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.994412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.994454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 
00:30:33.781 [2024-12-09 05:25:15.994707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.994786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.995041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.995095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.995337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.995382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.995533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.995574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.995782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.995823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 
00:30:33.781 [2024-12-09 05:25:15.996079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.996119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.996436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.996480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.996612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.996653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.996788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.996827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.997020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.997060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 
00:30:33.781 [2024-12-09 05:25:15.997333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.997376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.997669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.997710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.997913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.997953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.998196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.998260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.998469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.998509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 
00:30:33.781 [2024-12-09 05:25:15.998721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.998761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.998954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.998995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.999135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.999175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.999401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.999443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:15.999725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:15.999767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 
00:30:33.781 [2024-12-09 05:25:15.999972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:16.000013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:16.000242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:16.000285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:16.000480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:16.000521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:16.000781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:16.000821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:16.001041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:16.001085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 
00:30:33.781 [2024-12-09 05:25:16.001376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:16.001417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:16.001622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:16.001661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:16.001873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.781 [2024-12-09 05:25:16.001913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.781 qpair failed and we were unable to recover it. 00:30:33.781 [2024-12-09 05:25:16.002164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.002204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.002376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.002415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 
00:30:33.782 [2024-12-09 05:25:16.002605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.002644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.002790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.002830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.002968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.003007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.003140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.003180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.003405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.003446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 
00:30:33.782 [2024-12-09 05:25:16.003653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.003693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.003898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.003939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.004133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.004173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.004352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.004392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.004525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.004565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 
00:30:33.782 [2024-12-09 05:25:16.004760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.004806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.005022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.005062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.005225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.005267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.005469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.005509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.005637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.005677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 
00:30:33.782 [2024-12-09 05:25:16.005872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.005912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.006036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.006076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.006311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.006353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.006481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.006521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.006711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.006751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 
00:30:33.782 [2024-12-09 05:25:16.006944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.006983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.007217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.007259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.007468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.007507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.007711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.007750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.007979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.008020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 
00:30:33.782 [2024-12-09 05:25:16.008219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.008261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.008522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.008562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.008769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.008809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.009016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.009057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.009197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.009245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 
00:30:33.782 [2024-12-09 05:25:16.009386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.009426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.782 [2024-12-09 05:25:16.009627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.782 [2024-12-09 05:25:16.009667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.782 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.009862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.009901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.010052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.010092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.010316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.010358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 
00:30:33.783 [2024-12-09 05:25:16.010565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.010604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.010835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.010875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.011004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.011050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.011262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.011303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.011539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.011580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 
00:30:33.783 [2024-12-09 05:25:16.011779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.011818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.012019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.012059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.012256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.012297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.012502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.012542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.012746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.012788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 
00:30:33.783 [2024-12-09 05:25:16.012941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.012981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.013116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.013155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.013446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.013487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.013686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.013726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.013962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.014002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 
00:30:33.783 [2024-12-09 05:25:16.014195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.014249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.014515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.014556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.014756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.014795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.014949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.014989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.015202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.015254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 
00:30:33.783 [2024-12-09 05:25:16.015456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.015495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.015762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.015802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.015945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.015985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.016191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.016240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.016435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.016475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 
00:30:33.783 [2024-12-09 05:25:16.016601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.016642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.016892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.016932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.017084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.017124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.017268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.017309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 00:30:33.783 [2024-12-09 05:25:16.017472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.783 [2024-12-09 05:25:16.017512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.783 qpair failed and we were unable to recover it. 
00:30:33.784 [2024-12-09 05:25:16.020025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.784 [2024-12-09 05:25:16.020066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.784 qpair failed and we were unable to recover it. 00:30:33.784 [2024-12-09 05:25:16.020189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.784 [2024-12-09 05:25:16.020238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.784 qpair failed and we were unable to recover it. 00:30:33.784 [2024-12-09 05:25:16.020448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.784 [2024-12-09 05:25:16.020488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.784 qpair failed and we were unable to recover it. 00:30:33.784 [2024-12-09 05:25:16.020673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.784 [2024-12-09 05:25:16.020752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.784 qpair failed and we were unable to recover it. 00:30:33.784 [2024-12-09 05:25:16.020991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.784 [2024-12-09 05:25:16.021037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:33.784 qpair failed and we were unable to recover it. 
00:30:33.784 [2024-12-09 05:25:16.025746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.784 [2024-12-09 05:25:16.025811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.784 qpair failed and we were unable to recover it. 00:30:33.784 [2024-12-09 05:25:16.026022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.784 [2024-12-09 05:25:16.026073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.784 qpair failed and we were unable to recover it. 00:30:33.784 [2024-12-09 05:25:16.026299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.784 [2024-12-09 05:25:16.026350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.784 qpair failed and we were unable to recover it. 00:30:33.784 [2024-12-09 05:25:16.026580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.784 [2024-12-09 05:25:16.026636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.784 qpair failed and we were unable to recover it. 00:30:33.784 [2024-12-09 05:25:16.026858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.784 [2024-12-09 05:25:16.026907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:33.784 qpair failed and we were unable to recover it. 
00:30:33.784 [2024-12-09 05:25:16.028532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.784 [2024-12-09 05:25:16.028574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.784 qpair failed and we were unable to recover it. 00:30:33.784 [2024-12-09 05:25:16.028887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.784 [2024-12-09 05:25:16.028928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.784 qpair failed and we were unable to recover it. 00:30:33.784 [2024-12-09 05:25:16.029148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.784 [2024-12-09 05:25:16.029188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.784 qpair failed and we were unable to recover it. 00:30:33.784 [2024-12-09 05:25:16.029476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.785 [2024-12-09 05:25:16.029516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.785 qpair failed and we were unable to recover it. 00:30:33.785 [2024-12-09 05:25:16.029800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.785 [2024-12-09 05:25:16.029840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.785 qpair failed and we were unable to recover it. 
00:30:33.786 [2024-12-09 05:25:16.044744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.786 [2024-12-09 05:25:16.044785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.786 qpair failed and we were unable to recover it. 00:30:33.786 [2024-12-09 05:25:16.044931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.786 [2024-12-09 05:25:16.044971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.786 qpair failed and we were unable to recover it. 00:30:33.786 [2024-12-09 05:25:16.045162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.786 [2024-12-09 05:25:16.045202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.786 qpair failed and we were unable to recover it. 00:30:33.786 [2024-12-09 05:25:16.045492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.786 [2024-12-09 05:25:16.045533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.786 qpair failed and we were unable to recover it. 00:30:33.786 [2024-12-09 05:25:16.045737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.786 [2024-12-09 05:25:16.045776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.786 qpair failed and we were unable to recover it. 
00:30:33.786 [2024-12-09 05:25:16.045914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.786 [2024-12-09 05:25:16.045954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.786 qpair failed and we were unable to recover it. 00:30:33.786 [2024-12-09 05:25:16.046176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.786 [2024-12-09 05:25:16.046246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.786 qpair failed and we were unable to recover it. 00:30:33.786 [2024-12-09 05:25:16.046508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.786 [2024-12-09 05:25:16.046548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.786 qpair failed and we were unable to recover it. 00:30:33.786 [2024-12-09 05:25:16.046740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.786 [2024-12-09 05:25:16.046780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.786 qpair failed and we were unable to recover it. 00:30:33.786 [2024-12-09 05:25:16.046928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.786 [2024-12-09 05:25:16.046968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.786 qpair failed and we were unable to recover it. 
00:30:33.786 [2024-12-09 05:25:16.047178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.786 [2024-12-09 05:25:16.047229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.786 qpair failed and we were unable to recover it. 00:30:33.786 [2024-12-09 05:25:16.047447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.786 [2024-12-09 05:25:16.047488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.786 qpair failed and we were unable to recover it. 00:30:33.786 [2024-12-09 05:25:16.047682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.786 [2024-12-09 05:25:16.047721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.786 qpair failed and we were unable to recover it. 00:30:33.786 [2024-12-09 05:25:16.047883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.786 [2024-12-09 05:25:16.047922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.786 qpair failed and we were unable to recover it. 00:30:33.786 [2024-12-09 05:25:16.048058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.786 [2024-12-09 05:25:16.048098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.786 qpair failed and we were unable to recover it. 
00:30:33.786 [2024-12-09 05:25:16.048300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.786 [2024-12-09 05:25:16.048341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.786 qpair failed and we were unable to recover it. 00:30:33.786 [2024-12-09 05:25:16.048475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.786 [2024-12-09 05:25:16.048515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.786 qpair failed and we were unable to recover it. 00:30:33.786 [2024-12-09 05:25:16.048669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.786 [2024-12-09 05:25:16.048709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.786 qpair failed and we were unable to recover it. 00:30:33.786 [2024-12-09 05:25:16.048967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.786 [2024-12-09 05:25:16.049005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.786 qpair failed and we were unable to recover it. 00:30:33.786 [2024-12-09 05:25:16.049232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.786 [2024-12-09 05:25:16.049271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.786 qpair failed and we were unable to recover it. 
00:30:33.786 [2024-12-09 05:25:16.049467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.786 [2024-12-09 05:25:16.049506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.049642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.049681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.049959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.049997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.050194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.050266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.050461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.050498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 
00:30:33.787 [2024-12-09 05:25:16.050699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.050737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.050932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.050970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.051173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.051219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.051409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.051448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.051667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.051705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 
00:30:33.787 [2024-12-09 05:25:16.051911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.051949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.052233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.052272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.052476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.052515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.052775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.052814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.053021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.053059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 
00:30:33.787 [2024-12-09 05:25:16.053201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.053248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.053472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.053510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.053662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.053701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.053895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.053939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.054198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.054245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 
00:30:33.787 [2024-12-09 05:25:16.054385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.054423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.054629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.054668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.054883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.054921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.055121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.055160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.055306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.055345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 
00:30:33.787 [2024-12-09 05:25:16.055626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.055666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.055871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.055910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.056168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.056219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.056480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.056519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.056723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.056761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 
00:30:33.787 [2024-12-09 05:25:16.056956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.056999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.057128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.057167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.057398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.057436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.057579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.057620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.057773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.057818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 
00:30:33.787 [2024-12-09 05:25:16.058101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.058141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.058410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.058452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.058734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.058774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.058984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.059024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.059228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.059270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 
00:30:33.787 [2024-12-09 05:25:16.059486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.059527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.059749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.059789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.059927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.059967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.060108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.060148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 00:30:33.787 [2024-12-09 05:25:16.060394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.787 [2024-12-09 05:25:16.060435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.787 qpair failed and we were unable to recover it. 
00:30:33.787 [2024-12-09 05:25:16.060667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.788 [2024-12-09 05:25:16.060713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.788 qpair failed and we were unable to recover it. 00:30:33.788 [2024-12-09 05:25:16.060839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.788 [2024-12-09 05:25:16.060878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.788 qpair failed and we were unable to recover it. 00:30:33.788 [2024-12-09 05:25:16.061015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.788 [2024-12-09 05:25:16.061055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.788 qpair failed and we were unable to recover it. 00:30:33.788 [2024-12-09 05:25:16.061200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.788 [2024-12-09 05:25:16.061252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.788 qpair failed and we were unable to recover it. 00:30:33.788 [2024-12-09 05:25:16.061457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.788 [2024-12-09 05:25:16.061496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.788 qpair failed and we were unable to recover it. 
00:30:33.788 [2024-12-09 05:25:16.061692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.788 [2024-12-09 05:25:16.061733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.788 qpair failed and we were unable to recover it. 00:30:33.788 [2024-12-09 05:25:16.061873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.788 [2024-12-09 05:25:16.061913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.788 qpair failed and we were unable to recover it. 00:30:33.788 [2024-12-09 05:25:16.062116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.788 [2024-12-09 05:25:16.062154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.788 qpair failed and we were unable to recover it. 00:30:33.788 [2024-12-09 05:25:16.062454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.788 [2024-12-09 05:25:16.062498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.788 qpair failed and we were unable to recover it. 00:30:33.788 [2024-12-09 05:25:16.062689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.788 [2024-12-09 05:25:16.062730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.788 qpair failed and we were unable to recover it. 
00:30:33.788 [2024-12-09 05:25:16.062925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.788 [2024-12-09 05:25:16.062964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.788 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously, log timestamps 00:30:33.788-00:30:33.791, event timestamps 05:25:16.062925 through 05:25:16.091125 ...]
00:30:33.791 [2024-12-09 05:25:16.091335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.791 [2024-12-09 05:25:16.091375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.791 qpair failed and we were unable to recover it. 00:30:33.791 [2024-12-09 05:25:16.091600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.791 [2024-12-09 05:25:16.091640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.791 qpair failed and we were unable to recover it. 00:30:33.791 [2024-12-09 05:25:16.091831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.791 [2024-12-09 05:25:16.091871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.791 qpair failed and we were unable to recover it. 00:30:33.791 [2024-12-09 05:25:16.092094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.791 [2024-12-09 05:25:16.092134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.791 qpair failed and we were unable to recover it. 00:30:33.791 [2024-12-09 05:25:16.092357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.791 [2024-12-09 05:25:16.092399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.791 qpair failed and we were unable to recover it. 
00:30:33.791 [2024-12-09 05:25:16.092594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.791 [2024-12-09 05:25:16.092633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.791 qpair failed and we were unable to recover it. 00:30:33.791 [2024-12-09 05:25:16.092859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.791 [2024-12-09 05:25:16.092898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.791 qpair failed and we were unable to recover it. 00:30:33.791 [2024-12-09 05:25:16.093037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.791 [2024-12-09 05:25:16.093077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.791 qpair failed and we were unable to recover it. 00:30:33.791 [2024-12-09 05:25:16.093276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.791 [2024-12-09 05:25:16.093318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.791 qpair failed and we were unable to recover it. 00:30:33.791 [2024-12-09 05:25:16.093531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.791 [2024-12-09 05:25:16.093571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.791 qpair failed and we were unable to recover it. 
00:30:33.791 [2024-12-09 05:25:16.093850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.791 [2024-12-09 05:25:16.093890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.791 qpair failed and we were unable to recover it. 00:30:33.791 [2024-12-09 05:25:16.094031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.791 [2024-12-09 05:25:16.094071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.791 qpair failed and we were unable to recover it. 00:30:33.791 [2024-12-09 05:25:16.094277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.791 [2024-12-09 05:25:16.094318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.791 qpair failed and we were unable to recover it. 00:30:33.791 [2024-12-09 05:25:16.094451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.791 [2024-12-09 05:25:16.094491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.791 qpair failed and we were unable to recover it. 00:30:33.791 [2024-12-09 05:25:16.094697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.791 [2024-12-09 05:25:16.094736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.791 qpair failed and we were unable to recover it. 
00:30:33.791 [2024-12-09 05:25:16.094895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.791 [2024-12-09 05:25:16.094934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.791 qpair failed and we were unable to recover it. 00:30:33.791 [2024-12-09 05:25:16.095090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.791 [2024-12-09 05:25:16.095131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.791 qpair failed and we were unable to recover it. 00:30:33.791 [2024-12-09 05:25:16.095282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.791 [2024-12-09 05:25:16.095322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.791 qpair failed and we were unable to recover it. 00:30:33.791 [2024-12-09 05:25:16.095513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.791 [2024-12-09 05:25:16.095553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.791 qpair failed and we were unable to recover it. 00:30:33.791 [2024-12-09 05:25:16.095703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.791 [2024-12-09 05:25:16.095743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.791 qpair failed and we were unable to recover it. 
00:30:33.791 [2024-12-09 05:25:16.096002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.791 [2024-12-09 05:25:16.096042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.096239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.096280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.096476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.096517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.096741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.096781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.096930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.096970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 
00:30:33.792 [2024-12-09 05:25:16.097235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.097278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.097434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.097474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.097705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.097745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.097939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.097979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.098120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.098159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 
00:30:33.792 [2024-12-09 05:25:16.098391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.098432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.098632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.098672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.098816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.098856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.099160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.099199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.099416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.099455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 
00:30:33.792 [2024-12-09 05:25:16.099691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.099731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.099987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.100027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.100183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.100232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.100487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.100565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.100796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.100852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 
00:30:33.792 [2024-12-09 05:25:16.101061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.101111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.101256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.101302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.101612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.101653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.101911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.101960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.102234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.102278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 
00:30:33.792 [2024-12-09 05:25:16.102440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.102481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.102635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.102675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.102890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.102931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.103159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.103199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.103412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.103453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 
00:30:33.792 [2024-12-09 05:25:16.103673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.103713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.103910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.103959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.104115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.104155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.104315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.104367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.104575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.104623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 
00:30:33.792 [2024-12-09 05:25:16.104829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.104877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.105080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.105129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.105364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.105408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.105693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.105733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.792 qpair failed and we were unable to recover it. 00:30:33.792 [2024-12-09 05:25:16.105993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.792 [2024-12-09 05:25:16.106034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.793 qpair failed and we were unable to recover it. 
00:30:33.793 [2024-12-09 05:25:16.106169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.793 [2024-12-09 05:25:16.106220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.793 qpair failed and we were unable to recover it. 00:30:33.793 [2024-12-09 05:25:16.106350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.793 [2024-12-09 05:25:16.106390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.793 qpair failed and we were unable to recover it. 00:30:33.793 [2024-12-09 05:25:16.106651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.793 [2024-12-09 05:25:16.106698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.793 qpair failed and we were unable to recover it. 00:30:33.793 [2024-12-09 05:25:16.106902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.793 [2024-12-09 05:25:16.106942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.793 qpair failed and we were unable to recover it. 00:30:33.793 [2024-12-09 05:25:16.107084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.793 [2024-12-09 05:25:16.107125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.793 qpair failed and we were unable to recover it. 
00:30:33.793 [2024-12-09 05:25:16.107357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.793 [2024-12-09 05:25:16.107400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.793 qpair failed and we were unable to recover it. 00:30:33.793 [2024-12-09 05:25:16.107551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.793 [2024-12-09 05:25:16.107591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.793 qpair failed and we were unable to recover it. 00:30:33.793 [2024-12-09 05:25:16.107719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.793 [2024-12-09 05:25:16.107759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.793 qpair failed and we were unable to recover it. 00:30:33.793 [2024-12-09 05:25:16.107901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.793 [2024-12-09 05:25:16.107942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.793 qpair failed and we were unable to recover it. 00:30:33.793 [2024-12-09 05:25:16.108155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.793 [2024-12-09 05:25:16.108196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.793 qpair failed and we were unable to recover it. 
00:30:33.793 [2024-12-09 05:25:16.108439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.793 [2024-12-09 05:25:16.108480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.793 qpair failed and we were unable to recover it. 00:30:33.793 [2024-12-09 05:25:16.108712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.793 [2024-12-09 05:25:16.108751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.793 qpair failed and we were unable to recover it. 00:30:33.793 [2024-12-09 05:25:16.108942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.793 [2024-12-09 05:25:16.108991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.793 qpair failed and we were unable to recover it. 00:30:33.793 [2024-12-09 05:25:16.109293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.793 [2024-12-09 05:25:16.109338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.793 qpair failed and we were unable to recover it. 00:30:33.793 [2024-12-09 05:25:16.109500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.793 [2024-12-09 05:25:16.109540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.793 qpair failed and we were unable to recover it. 
00:30:33.793 [2024-12-09 05:25:16.109831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.793 [2024-12-09 05:25:16.109872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.793 qpair failed and we were unable to recover it. 00:30:33.793 [2024-12-09 05:25:16.110070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.793 [2024-12-09 05:25:16.110111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.793 qpair failed and we were unable to recover it. 00:30:33.793 [2024-12-09 05:25:16.110314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.793 [2024-12-09 05:25:16.110356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.793 qpair failed and we were unable to recover it. 00:30:33.793 [2024-12-09 05:25:16.110552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.793 [2024-12-09 05:25:16.110599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.793 qpair failed and we were unable to recover it. 00:30:33.793 [2024-12-09 05:25:16.110727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.793 [2024-12-09 05:25:16.110768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.793 qpair failed and we were unable to recover it. 
00:30:33.796 [2024-12-09 05:25:16.139007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.139048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.139307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.139349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.139501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.139541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.139802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.139843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.140055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.140095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 
00:30:33.796 [2024-12-09 05:25:16.140360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.140402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.140607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.140647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.140786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.140826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.141062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.141108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.141321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.141364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 
00:30:33.796 [2024-12-09 05:25:16.141567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.141607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.141801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.141841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.141994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.142035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.142172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.142220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.142364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.142405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 
00:30:33.796 [2024-12-09 05:25:16.142636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.142677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.142873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.142912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.143107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.143147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.143379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.143422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.143674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.143714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 
00:30:33.796 [2024-12-09 05:25:16.143951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.143991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.144195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.144259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.144434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.144475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.144678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.144718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.144924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.144963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 
00:30:33.796 [2024-12-09 05:25:16.145122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.145162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.145403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.145446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.145570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.145610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.145816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.145856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.796 [2024-12-09 05:25:16.146067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.146107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 
00:30:33.796 [2024-12-09 05:25:16.146390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.796 [2024-12-09 05:25:16.146432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.796 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.146720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.146760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.147009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.147049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.147188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.147248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.147470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.147511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 
00:30:33.797 [2024-12-09 05:25:16.147804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.147845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.147983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.148023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.148171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.148218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.148388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.148430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.148639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.148678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 
00:30:33.797 [2024-12-09 05:25:16.148891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.148931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.149159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.149199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.149489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.149530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.149686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.149726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.149929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.149969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 
00:30:33.797 [2024-12-09 05:25:16.150175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.150227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.150378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.150418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.150623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.150663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.150936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.150982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.151186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.151235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 
00:30:33.797 [2024-12-09 05:25:16.151447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.151488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.151698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.151738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.151940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.151980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.152118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.152157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.152393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.152437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 
00:30:33.797 [2024-12-09 05:25:16.152642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.152681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.152914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.152955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.153165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.153206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.153500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.153540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.153801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.153841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 
00:30:33.797 [2024-12-09 05:25:16.153980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.154020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.154156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.154195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.154414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.154455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.154614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.154655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.154915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.154955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 
00:30:33.797 [2024-12-09 05:25:16.155181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.155231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.155438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.155478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.155691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.155731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.155931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.155971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.156187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.156253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 
00:30:33.797 [2024-12-09 05:25:16.156447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.156487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.156770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.156810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.157044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.157086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.157296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.157338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 00:30:33.797 [2024-12-09 05:25:16.157543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.797 [2024-12-09 05:25:16.157583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.797 qpair failed and we were unable to recover it. 
00:30:33.797 [2024-12-09 05:25:16.157718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.797 [2024-12-09 05:25:16.157759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.797 qpair failed and we were unable to recover it.
00:30:33.800 [2024-12-09 05:25:16.186563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.800 [2024-12-09 05:25:16.186603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.800 qpair failed and we were unable to recover it. 00:30:33.800 [2024-12-09 05:25:16.186729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.800 [2024-12-09 05:25:16.186769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.800 qpair failed and we were unable to recover it. 00:30:33.800 [2024-12-09 05:25:16.186977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.800 [2024-12-09 05:25:16.187017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.800 qpair failed and we were unable to recover it. 00:30:33.800 [2024-12-09 05:25:16.187230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.800 [2024-12-09 05:25:16.187272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.800 qpair failed and we were unable to recover it. 00:30:33.800 [2024-12-09 05:25:16.187555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.800 [2024-12-09 05:25:16.187595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.800 qpair failed and we were unable to recover it. 
00:30:33.800 [2024-12-09 05:25:16.187751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.800 [2024-12-09 05:25:16.187792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.800 qpair failed and we were unable to recover it. 00:30:33.800 [2024-12-09 05:25:16.187933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.800 [2024-12-09 05:25:16.187974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.800 qpair failed and we were unable to recover it. 00:30:33.800 [2024-12-09 05:25:16.188131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.800 [2024-12-09 05:25:16.188171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.800 qpair failed and we were unable to recover it. 00:30:33.800 [2024-12-09 05:25:16.188454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.800 [2024-12-09 05:25:16.188496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.800 qpair failed and we were unable to recover it. 00:30:33.800 [2024-12-09 05:25:16.188722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.800 [2024-12-09 05:25:16.188763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.800 qpair failed and we were unable to recover it. 
00:30:33.800 [2024-12-09 05:25:16.189044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.800 [2024-12-09 05:25:16.189084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.800 qpair failed and we were unable to recover it. 00:30:33.800 [2024-12-09 05:25:16.189293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.800 [2024-12-09 05:25:16.189335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.800 qpair failed and we were unable to recover it. 00:30:33.800 [2024-12-09 05:25:16.189491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.800 [2024-12-09 05:25:16.189532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.800 qpair failed and we were unable to recover it. 00:30:33.800 [2024-12-09 05:25:16.189668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.800 [2024-12-09 05:25:16.189708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.800 qpair failed and we were unable to recover it. 00:30:33.800 [2024-12-09 05:25:16.189851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.800 [2024-12-09 05:25:16.189892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.800 qpair failed and we were unable to recover it. 
00:30:33.800 [2024-12-09 05:25:16.190044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.800 [2024-12-09 05:25:16.190084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.800 qpair failed and we were unable to recover it. 00:30:33.800 [2024-12-09 05:25:16.190283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.800 [2024-12-09 05:25:16.190324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.800 qpair failed and we were unable to recover it. 00:30:33.800 [2024-12-09 05:25:16.190536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.800 [2024-12-09 05:25:16.190576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.800 qpair failed and we were unable to recover it. 00:30:33.800 [2024-12-09 05:25:16.190836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.190882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.191097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.191137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 
00:30:33.801 [2024-12-09 05:25:16.191338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.191380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.191526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.191567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.191849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.191889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.192036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.192076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.192337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.192378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 
00:30:33.801 [2024-12-09 05:25:16.192518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.192558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.192781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.192821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.193031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.193071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.193236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.193278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.193409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.193448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 
00:30:33.801 [2024-12-09 05:25:16.193660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.193700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.193988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.194028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.194246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.194288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.194525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.194565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.194791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.194832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 
00:30:33.801 [2024-12-09 05:25:16.195161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.195200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.195383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.195424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.195628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.195669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.195800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.195839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.195982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.196022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 
00:30:33.801 [2024-12-09 05:25:16.196241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.196283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.196491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.196530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.196677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.196717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.196975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.197016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.197164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.197204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 
00:30:33.801 [2024-12-09 05:25:16.197436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.197477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.197672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.197713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.197930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.197970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.198166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.198206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.198447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.198487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 
00:30:33.801 [2024-12-09 05:25:16.198645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.198685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.198896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.198936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.199138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.199178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.199397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.199439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.199717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.199757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 
00:30:33.801 [2024-12-09 05:25:16.199980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.200019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.200278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.200320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.200567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.200607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.200835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.200881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.201089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.201129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 
00:30:33.801 [2024-12-09 05:25:16.201422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.201463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.201656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.201697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.201918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.201959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.801 [2024-12-09 05:25:16.202088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.801 [2024-12-09 05:25:16.202127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.801 qpair failed and we were unable to recover it. 00:30:33.802 [2024-12-09 05:25:16.202335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.802 [2024-12-09 05:25:16.202376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.802 qpair failed and we were unable to recover it. 
00:30:33.802 [2024-12-09 05:25:16.202610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.802 [2024-12-09 05:25:16.202650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.802 qpair failed and we were unable to recover it. 00:30:33.802 [2024-12-09 05:25:16.202789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.802 [2024-12-09 05:25:16.202828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.802 qpair failed and we were unable to recover it. 00:30:33.802 [2024-12-09 05:25:16.203022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.802 [2024-12-09 05:25:16.203062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.802 qpair failed and we were unable to recover it. 00:30:33.802 [2024-12-09 05:25:16.203344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.802 [2024-12-09 05:25:16.203387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.802 qpair failed and we were unable to recover it. 00:30:33.802 [2024-12-09 05:25:16.203598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.802 [2024-12-09 05:25:16.203638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.802 qpair failed and we were unable to recover it. 
00:30:33.802 [2024-12-09 05:25:16.203831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.802 [2024-12-09 05:25:16.203871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.802 qpair failed and we were unable to recover it. 00:30:33.802 [2024-12-09 05:25:16.204021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.802 [2024-12-09 05:25:16.204061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.802 qpair failed and we were unable to recover it. 00:30:33.802 [2024-12-09 05:25:16.204373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.802 [2024-12-09 05:25:16.204415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.802 qpair failed and we were unable to recover it. 00:30:33.802 [2024-12-09 05:25:16.204559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.802 [2024-12-09 05:25:16.204598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.802 qpair failed and we were unable to recover it. 00:30:33.802 [2024-12-09 05:25:16.204810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.802 [2024-12-09 05:25:16.204850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:33.802 qpair failed and we were unable to recover it. 
00:30:33.802 [2024-12-09 05:25:16.205053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.802 [2024-12-09 05:25:16.205092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.802 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet for tqpair=0x7ff960000b90 repeated 62 more times, timestamps 05:25:16.205309 through 05:25:16.220129, elided ...]
00:30:33.803 [2024-12-09 05:25:16.220444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.803 [2024-12-09 05:25:16.220533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.803 qpair failed and we were unable to recover it.
[... same triplet for tqpair=0x1c91000 repeated 13 more times, timestamps 05:25:16.220702 through 05:25:16.223332, elided ...]
00:30:33.803 [2024-12-09 05:25:16.223599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9ef20 is same with the state(6) to be set
00:30:33.803 [2024-12-09 05:25:16.223839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.803 [2024-12-09 05:25:16.223919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:33.803 qpair failed and we were unable to recover it.
00:30:33.803 [2024-12-09 05:25:16.224071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.803 [2024-12-09 05:25:16.224116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:33.803 qpair failed and we were unable to recover it.
[... same triplet for tqpair=0x7ff960000b90 repeated 13 more times, timestamps 05:25:16.224361 through 05:25:16.227013, elided ...]
00:30:33.803 [2024-12-09 05:25:16.227231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:33.803 [2024-12-09 05:25:16.227275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:33.803 qpair failed and we were unable to recover it.
[... same triplet for tqpair=0x1c91000 repeated 22 more times, timestamps 05:25:16.227417 through 05:25:16.232197, with the log wall clock advancing from 00:30:33.803 to 00:30:34.079 at 05:25:16.228591, elided ...]
00:30:34.079 [2024-12-09 05:25:16.232385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.079 [2024-12-09 05:25:16.232426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.079 qpair failed and we were unable to recover it. 00:30:34.079 [2024-12-09 05:25:16.232569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.079 [2024-12-09 05:25:16.232612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.079 qpair failed and we were unable to recover it. 00:30:34.079 [2024-12-09 05:25:16.232782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.079 [2024-12-09 05:25:16.232823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.079 qpair failed and we were unable to recover it. 00:30:34.079 [2024-12-09 05:25:16.232976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.079 [2024-12-09 05:25:16.233024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.079 qpair failed and we were unable to recover it. 00:30:34.079 [2024-12-09 05:25:16.233186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.079 [2024-12-09 05:25:16.233242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.079 qpair failed and we were unable to recover it. 
00:30:34.079 [2024-12-09 05:25:16.233521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.079 [2024-12-09 05:25:16.233561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.079 qpair failed and we were unable to recover it. 00:30:34.079 [2024-12-09 05:25:16.233781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.079 [2024-12-09 05:25:16.233821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.079 qpair failed and we were unable to recover it. 00:30:34.079 [2024-12-09 05:25:16.233969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.079 [2024-12-09 05:25:16.234014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.079 qpair failed and we were unable to recover it. 00:30:34.079 [2024-12-09 05:25:16.234233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.079 [2024-12-09 05:25:16.234275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.079 qpair failed and we were unable to recover it. 00:30:34.079 [2024-12-09 05:25:16.234471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.079 [2024-12-09 05:25:16.234511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.079 qpair failed and we were unable to recover it. 
00:30:34.079 [2024-12-09 05:25:16.234659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.079 [2024-12-09 05:25:16.234699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.079 qpair failed and we were unable to recover it. 00:30:34.079 [2024-12-09 05:25:16.234913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.079 [2024-12-09 05:25:16.234953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.079 qpair failed and we were unable to recover it. 00:30:34.079 [2024-12-09 05:25:16.235147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.079 [2024-12-09 05:25:16.235187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.079 qpair failed and we were unable to recover it. 00:30:34.079 [2024-12-09 05:25:16.235363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.079 [2024-12-09 05:25:16.235406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.079 qpair failed and we were unable to recover it. 00:30:34.079 [2024-12-09 05:25:16.235614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.079 [2024-12-09 05:25:16.235654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.079 qpair failed and we were unable to recover it. 
00:30:34.079 [2024-12-09 05:25:16.235844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.079 [2024-12-09 05:25:16.235890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.079 qpair failed and we were unable to recover it. 00:30:34.079 [2024-12-09 05:25:16.236039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.236079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.236279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.236320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.236552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.236593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.236794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.236835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 
00:30:34.080 [2024-12-09 05:25:16.237027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.237067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.237286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.237328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.237528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.237569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.237720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.237772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.238043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.238085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 
00:30:34.080 [2024-12-09 05:25:16.238234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.238276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.238409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.238448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.238600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.238640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.238844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.238884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.239092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.239132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 
00:30:34.080 [2024-12-09 05:25:16.239329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.239370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.239580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.239619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.239814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.239854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.239980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.240020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.240222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.240263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 
00:30:34.080 [2024-12-09 05:25:16.240453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.240493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.240745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.240785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.241003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.241043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.241249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.241291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.241461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.241504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 
00:30:34.080 [2024-12-09 05:25:16.241767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.241806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.241934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.241974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.242112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.242159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.242309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.242350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.242576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.242616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 
00:30:34.080 [2024-12-09 05:25:16.242747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.242787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.242912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.242951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.243103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.243142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.243448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.243490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.243692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.243731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 
00:30:34.080 [2024-12-09 05:25:16.243946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.243985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.244175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.244245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.244473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.244514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.244724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.080 [2024-12-09 05:25:16.244763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.080 qpair failed and we were unable to recover it. 00:30:34.080 [2024-12-09 05:25:16.244918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.244958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 
00:30:34.081 [2024-12-09 05:25:16.245149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.245190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 00:30:34.081 [2024-12-09 05:25:16.245401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.245464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 00:30:34.081 [2024-12-09 05:25:16.245696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.245755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 00:30:34.081 [2024-12-09 05:25:16.245975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.246033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 00:30:34.081 [2024-12-09 05:25:16.246249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.246309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 
00:30:34.081 [2024-12-09 05:25:16.246542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.246591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 00:30:34.081 [2024-12-09 05:25:16.246841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.246891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 00:30:34.081 [2024-12-09 05:25:16.247224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.247268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 00:30:34.081 [2024-12-09 05:25:16.247482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.247522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 00:30:34.081 [2024-12-09 05:25:16.247694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.247734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 
00:30:34.081 [2024-12-09 05:25:16.248005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.248045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 00:30:34.081 [2024-12-09 05:25:16.248249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.248290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 00:30:34.081 [2024-12-09 05:25:16.248430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.248470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 00:30:34.081 [2024-12-09 05:25:16.248728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.248768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 00:30:34.081 [2024-12-09 05:25:16.248898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.248937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 
00:30:34.081 [2024-12-09 05:25:16.249168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.249226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 00:30:34.081 [2024-12-09 05:25:16.249432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.249477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 00:30:34.081 [2024-12-09 05:25:16.249688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.249728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 00:30:34.081 [2024-12-09 05:25:16.249870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.249910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 00:30:34.081 [2024-12-09 05:25:16.250169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.250220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 
00:30:34.081 [2024-12-09 05:25:16.250423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.250463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 00:30:34.081 [2024-12-09 05:25:16.250674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.250715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 00:30:34.081 [2024-12-09 05:25:16.250908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.250947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 00:30:34.081 [2024-12-09 05:25:16.251096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.251136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 00:30:34.081 [2024-12-09 05:25:16.251356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.081 [2024-12-09 05:25:16.251398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.081 qpair failed and we were unable to recover it. 
00:30:34.081 [2024-12-09 05:25:16.252357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.081 [2024-12-09 05:25:16.252417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.081 qpair failed and we were unable to recover it.
00:30:34.083 [2024-12-09 05:25:16.270467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.083 [2024-12-09 05:25:16.270545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.083 qpair failed and we were unable to recover it.
00:30:34.084 [2024-12-09 05:25:16.279805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.084 [2024-12-09 05:25:16.279854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.084 qpair failed and we were unable to recover it. 00:30:34.084 [2024-12-09 05:25:16.280014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.084 [2024-12-09 05:25:16.280063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.084 qpair failed and we were unable to recover it. 00:30:34.084 [2024-12-09 05:25:16.280198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.084 [2024-12-09 05:25:16.280261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.084 qpair failed and we were unable to recover it. 00:30:34.084 [2024-12-09 05:25:16.280506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.084 [2024-12-09 05:25:16.280554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.084 qpair failed and we were unable to recover it. 00:30:34.084 [2024-12-09 05:25:16.280754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.084 [2024-12-09 05:25:16.280794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.084 qpair failed and we were unable to recover it. 
00:30:34.084 [2024-12-09 05:25:16.281005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.084 [2024-12-09 05:25:16.281054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.084 qpair failed and we were unable to recover it. 00:30:34.084 [2024-12-09 05:25:16.281292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.084 [2024-12-09 05:25:16.281346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.084 qpair failed and we were unable to recover it. 00:30:34.084 [2024-12-09 05:25:16.281547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.084 [2024-12-09 05:25:16.281588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.084 qpair failed and we were unable to recover it. 00:30:34.084 [2024-12-09 05:25:16.281782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.084 [2024-12-09 05:25:16.281822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.084 qpair failed and we were unable to recover it. 00:30:34.084 [2024-12-09 05:25:16.281960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.084 [2024-12-09 05:25:16.282001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.084 qpair failed and we were unable to recover it. 
00:30:34.084 [2024-12-09 05:25:16.282199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.084 [2024-12-09 05:25:16.282311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.084 qpair failed and we were unable to recover it. 00:30:34.084 [2024-12-09 05:25:16.282540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.282585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.282726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.282767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.282904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.282945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.283203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.283265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 
00:30:34.085 [2024-12-09 05:25:16.283459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.283499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.283660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.283713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.284006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.284054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.284221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.284271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.284482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.284530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 
00:30:34.085 [2024-12-09 05:25:16.284738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.284786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.285077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.285125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.285355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.285397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.285618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.285659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.285859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.285900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 
00:30:34.085 [2024-12-09 05:25:16.286044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.286084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.286294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.286336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.286606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.286647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.286850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.286891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.287108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.287149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 
00:30:34.085 [2024-12-09 05:25:16.287347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.287389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.287517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.287558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.287747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.287788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.288014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.288054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.288311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.288352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 
00:30:34.085 [2024-12-09 05:25:16.288588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.288628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.288827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.288867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.289130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.289182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.289493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.289535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.289668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.289707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 
00:30:34.085 [2024-12-09 05:25:16.289849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.289889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.290086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.290127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.290402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.290445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.290599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.290639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.290783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.290824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 
00:30:34.085 [2024-12-09 05:25:16.291027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.291066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.291222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.291264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.291478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.291521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.291671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.085 [2024-12-09 05:25:16.291712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.085 qpair failed and we were unable to recover it. 00:30:34.085 [2024-12-09 05:25:16.291927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.291967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 
00:30:34.086 [2024-12-09 05:25:16.292233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.292276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 00:30:34.086 [2024-12-09 05:25:16.292406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.292447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 00:30:34.086 [2024-12-09 05:25:16.292725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.292765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 00:30:34.086 [2024-12-09 05:25:16.292911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.292950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 00:30:34.086 [2024-12-09 05:25:16.293144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.293191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 
00:30:34.086 [2024-12-09 05:25:16.293476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.293556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 00:30:34.086 [2024-12-09 05:25:16.293734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.293779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 00:30:34.086 [2024-12-09 05:25:16.293981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.294022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 00:30:34.086 [2024-12-09 05:25:16.294181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.294240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 00:30:34.086 [2024-12-09 05:25:16.294466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.294506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 
00:30:34.086 [2024-12-09 05:25:16.294651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.294693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 00:30:34.086 [2024-12-09 05:25:16.294895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.294935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 00:30:34.086 [2024-12-09 05:25:16.295088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.295129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 00:30:34.086 [2024-12-09 05:25:16.295269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.295311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 00:30:34.086 [2024-12-09 05:25:16.295520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.295561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 
00:30:34.086 [2024-12-09 05:25:16.295774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.295814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 00:30:34.086 [2024-12-09 05:25:16.295964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.296004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 00:30:34.086 [2024-12-09 05:25:16.296296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.296338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 00:30:34.086 [2024-12-09 05:25:16.296552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.296600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 00:30:34.086 [2024-12-09 05:25:16.296747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.296786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 
00:30:34.086 [2024-12-09 05:25:16.296980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.297020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 00:30:34.086 [2024-12-09 05:25:16.297175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.297231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 00:30:34.086 [2024-12-09 05:25:16.297538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.297578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 00:30:34.086 [2024-12-09 05:25:16.297736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.297777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 00:30:34.086 [2024-12-09 05:25:16.297970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.086 [2024-12-09 05:25:16.298010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.086 qpair failed and we were unable to recover it. 
00:30:34.086 [2024-12-09 05:25:16.298230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.086 [2024-12-09 05:25:16.298272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.086 qpair failed and we were unable to recover it.
[... the identical connect() failed / sock connection error / qpair failed sequence repeats for tqpair=0x1c91000 from 05:25:16.298400 through 05:25:16.309571 ...]
00:30:34.087 [2024-12-09 05:25:16.309816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.087 [2024-12-09 05:25:16.309895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.087 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7ff96c000b90 from 05:25:16.310134 through 05:25:16.319280 ...]
00:30:34.089 [2024-12-09 05:25:16.319540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.089 [2024-12-09 05:25:16.319619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.089 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7ff964000b90 from 05:25:16.319809 through 05:25:16.325921 ...]
00:30:34.089 [2024-12-09 05:25:16.326081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.089 [2024-12-09 05:25:16.326121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.089 qpair failed and we were unable to recover it. 00:30:34.089 [2024-12-09 05:25:16.326268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.089 [2024-12-09 05:25:16.326309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.089 qpair failed and we were unable to recover it. 00:30:34.089 [2024-12-09 05:25:16.326454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.089 [2024-12-09 05:25:16.326494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.089 qpair failed and we were unable to recover it. 00:30:34.089 [2024-12-09 05:25:16.326714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.089 [2024-12-09 05:25:16.326754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.089 qpair failed and we were unable to recover it. 00:30:34.089 [2024-12-09 05:25:16.327026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.089 [2024-12-09 05:25:16.327067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.089 qpair failed and we were unable to recover it. 
00:30:34.089 [2024-12-09 05:25:16.327275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.089 [2024-12-09 05:25:16.327316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.089 qpair failed and we were unable to recover it. 00:30:34.089 [2024-12-09 05:25:16.327510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.089 [2024-12-09 05:25:16.327549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.089 qpair failed and we were unable to recover it. 00:30:34.089 [2024-12-09 05:25:16.327774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.089 [2024-12-09 05:25:16.327815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.089 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.328005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.328046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.328257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.328299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 
00:30:34.090 [2024-12-09 05:25:16.328445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.328485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.328743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.328784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.328922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.328963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.329119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.329158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.329379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.329419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 
00:30:34.090 [2024-12-09 05:25:16.329703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.329743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.329949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.329989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.330192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.330245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.330411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.330451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.330643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.330683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 
00:30:34.090 [2024-12-09 05:25:16.330901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.330941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.331072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.331112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.331363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.331405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.331539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.331578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.331708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.331747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 
00:30:34.090 [2024-12-09 05:25:16.331892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.331933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.332078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.332118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.332269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.332310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.332589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.332629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.332762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.332801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 
00:30:34.090 [2024-12-09 05:25:16.333085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.333125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.333385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.333427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.333573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.333613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.333894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.333933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.334235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.334277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 
00:30:34.090 [2024-12-09 05:25:16.334559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.334599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.334771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.334811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.334951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.334991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.335185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.335237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.335392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.335432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 
00:30:34.090 [2024-12-09 05:25:16.335627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.335667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.335868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.335908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.336099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.336138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.336295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.336336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.336552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.336593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 
00:30:34.090 [2024-12-09 05:25:16.336854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.090 [2024-12-09 05:25:16.336893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.090 qpair failed and we were unable to recover it. 00:30:34.090 [2024-12-09 05:25:16.337032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.337072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.337275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.337316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.337601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.337641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.337844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.337884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 
00:30:34.091 [2024-12-09 05:25:16.338094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.338135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.338291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.338332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.338536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.338588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.338784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.338825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.338954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.338993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 
00:30:34.091 [2024-12-09 05:25:16.339132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.339172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.339328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.339369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.339648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.339688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.339950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.339989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.340129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.340168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 
00:30:34.091 [2024-12-09 05:25:16.340375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.340415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.340567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.340606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.340868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.340908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.341121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.341161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.341453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.341494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 
00:30:34.091 [2024-12-09 05:25:16.341625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.341666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.341942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.341983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.342180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.342231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.342445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.342485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.342676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.342716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 
00:30:34.091 [2024-12-09 05:25:16.342856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.342896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.343094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.343134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.343423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.343464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.343670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.343711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 00:30:34.091 [2024-12-09 05:25:16.343916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.091 [2024-12-09 05:25:16.343956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.091 qpair failed and we were unable to recover it. 
00:30:34.091 [2024-12-09 05:25:16.344225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.091 [2024-12-09 05:25:16.344266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.091 qpair failed and we were unable to recover it.
00:30:34.091 [2024-12-09 05:25:16.344423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.091 [2024-12-09 05:25:16.344463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.091 qpair failed and we were unable to recover it.
00:30:34.091 [2024-12-09 05:25:16.344657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.091 [2024-12-09 05:25:16.344697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.091 qpair failed and we were unable to recover it.
00:30:34.091 [2024-12-09 05:25:16.344845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.091 [2024-12-09 05:25:16.344884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.091 qpair failed and we were unable to recover it.
00:30:34.091 [2024-12-09 05:25:16.345082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.091 [2024-12-09 05:25:16.345122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.091 qpair failed and we were unable to recover it.
00:30:34.091 [2024-12-09 05:25:16.345268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.091 [2024-12-09 05:25:16.345309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.091 qpair failed and we were unable to recover it.
00:30:34.091 [2024-12-09 05:25:16.345505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.091 [2024-12-09 05:25:16.345544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.091 qpair failed and we were unable to recover it.
00:30:34.091 [2024-12-09 05:25:16.345755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.091 [2024-12-09 05:25:16.345795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.091 qpair failed and we were unable to recover it.
00:30:34.091 [2024-12-09 05:25:16.346028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.091 [2024-12-09 05:25:16.346068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.091 qpair failed and we were unable to recover it.
00:30:34.091 [2024-12-09 05:25:16.346299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.346340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.346540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.346581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.346784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.346825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.347037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.347076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.347299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.347340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.347597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.347638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.347832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.347871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.348067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.348107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.348366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.348414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.348550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.348589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.348784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.348824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.349100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.349139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.349379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.349420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.349577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.349617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.349828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.349867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.350070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.350109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.350305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.350347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.350559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.350599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.350751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.350791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.350992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.351032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.351172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.351220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.351441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.351481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.351698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.351738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.351875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.351915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.352066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.352105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.352311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.352352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.352598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.352638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.352835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.352875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.353070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.353110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.353254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.353296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.353428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.353468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.353615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.353655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.353847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.353886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.354145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.092 [2024-12-09 05:25:16.354185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.092 qpair failed and we were unable to recover it.
00:30:34.092 [2024-12-09 05:25:16.354402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.354443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.354646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.354686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.354897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.354937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.355199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.355254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.355390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.355429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.355687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.355727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.355860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.355899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.356040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.356079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.356305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.356346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.356492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.356532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.356666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.356705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.356838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.356878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.357071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.357111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.357334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.357376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.357606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.357652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.357839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.357879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.358124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.358163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.358312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.358353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.358557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.358596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.358800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.358839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.359036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.359076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.359204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.359253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.359546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.359585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.359782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.359821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.360022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.360061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.360272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.360313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.360466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.360506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.360713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.360753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.361017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.361057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.361326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.361367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.361627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.361668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.361924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.361964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.362184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.362235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.362494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.362535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.362816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.362856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.362992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.363032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.363312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.363354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.363493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.093 [2024-12-09 05:25:16.363533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.093 qpair failed and we were unable to recover it.
00:30:34.093 [2024-12-09 05:25:16.363724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.363764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.363952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.363992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.364253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.364294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.364497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.364537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.364726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.364766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.364959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.364998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.365257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.365298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.365602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.365642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.365834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.365874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.366064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.366104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.366300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.366341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.366535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.366574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.366777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.366816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.367027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.367067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.367218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.367259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.367488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.367528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.367786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.367831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.368115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.368154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.368426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.368467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.368635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.368675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.368871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.368911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.369169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.369219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.369484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.369525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.369795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.369835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.370111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.370151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.370317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.370359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.370611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.370650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.370800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.370839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.371041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.371082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.371365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.371406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.371641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.371681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.371899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.371940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.372150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.372191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.372347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.094 [2024-12-09 05:25:16.372387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.094 qpair failed and we were unable to recover it.
00:30:34.094 [2024-12-09 05:25:16.372624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.094 [2024-12-09 05:25:16.372665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.094 qpair failed and we were unable to recover it. 00:30:34.094 [2024-12-09 05:25:16.372869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.094 [2024-12-09 05:25:16.372909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.094 qpair failed and we were unable to recover it. 00:30:34.094 [2024-12-09 05:25:16.373172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.094 [2024-12-09 05:25:16.373241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.094 qpair failed and we were unable to recover it. 00:30:34.094 [2024-12-09 05:25:16.373530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.094 [2024-12-09 05:25:16.373569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.094 qpair failed and we were unable to recover it. 00:30:34.094 [2024-12-09 05:25:16.373724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.094 [2024-12-09 05:25:16.373763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 
00:30:34.095 [2024-12-09 05:25:16.373913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.373953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.374228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.374270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.374504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.374545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.374737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.374778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.374975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.375015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 
00:30:34.095 [2024-12-09 05:25:16.375229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.375270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.375495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.375534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.375759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.375799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.375999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.376038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.376318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.376359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 
00:30:34.095 [2024-12-09 05:25:16.376508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.376548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.376740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.376779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.377083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.377123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.377384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.377426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.377570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.377610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 
00:30:34.095 [2024-12-09 05:25:16.377908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.377948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.378157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.378196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.378484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.378532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.378735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.378774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.378969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.379009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 
00:30:34.095 [2024-12-09 05:25:16.379218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.379260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.379409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.379449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.379731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.379772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.380025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.380064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.380268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.380308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 
00:30:34.095 [2024-12-09 05:25:16.380521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.380561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.380728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.380767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.380990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.381029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.381242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.381283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.381487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.381526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 
00:30:34.095 [2024-12-09 05:25:16.381750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.381790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.382004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.382044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.382290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.382330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.382552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.382592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.382917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.382958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 
00:30:34.095 [2024-12-09 05:25:16.383216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.383256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.383536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.383576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.095 [2024-12-09 05:25:16.383789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.095 [2024-12-09 05:25:16.383830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.095 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.384032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.384072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.384303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.384344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 
00:30:34.096 [2024-12-09 05:25:16.384551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.384591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.384868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.384908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.385155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.385195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.385430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.385470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.385755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.385796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 
00:30:34.096 [2024-12-09 05:25:16.386000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.386040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.386252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.386293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.386503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.386543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.386747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.386788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.387047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.387087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 
00:30:34.096 [2024-12-09 05:25:16.387300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.387341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.387558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.387597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.387752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.387792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.388004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.388044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.388274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.388315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 
00:30:34.096 [2024-12-09 05:25:16.388610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.388650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.388947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.388987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.389137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.389188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.389355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.389395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.389601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.389640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 
00:30:34.096 [2024-12-09 05:25:16.389796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.389835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.390093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.390133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.390339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.390380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.390596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.390635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.390777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.390816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 
00:30:34.096 [2024-12-09 05:25:16.391034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.391075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.391287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.391329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.391608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.391648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.391974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.392015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.392233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.392274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 
00:30:34.096 [2024-12-09 05:25:16.392490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.392530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.392752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.392792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.393095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.393134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.393379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.393423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 00:30:34.096 [2024-12-09 05:25:16.393642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.096 [2024-12-09 05:25:16.393682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.096 qpair failed and we were unable to recover it. 
00:30:34.099 [... the same three-line record repeats for each subsequent connection attempt: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — recurring continuously from 2024-12-09 05:25:16.393915 through 05:25:16.425554 ...]
00:30:34.100 [2024-12-09 05:25:16.425849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.425890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.426047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.426086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.426304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.426346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.426657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.426698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.426906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.426945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 
00:30:34.100 [2024-12-09 05:25:16.427231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.427272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.427565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.427605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.427897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.427937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.428131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.428171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.428329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.428370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 
00:30:34.100 [2024-12-09 05:25:16.428651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.428691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.428989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.429029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.429261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.429302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.429536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.429575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.429882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.429921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 
00:30:34.100 [2024-12-09 05:25:16.430220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.430262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.430548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.430588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.430808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.430847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.431061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.431101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.431251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.431294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 
00:30:34.100 [2024-12-09 05:25:16.431537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.431577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.431854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.431893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.432105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.432145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.432353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.432394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.432683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.432723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 
00:30:34.100 [2024-12-09 05:25:16.432959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.432999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.433275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.433315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.433531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.433571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.433852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.433893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.434158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.434198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 
00:30:34.100 [2024-12-09 05:25:16.434412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.434452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.434663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.434703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.434923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.434970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.435200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.435271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.435545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.435584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 
00:30:34.100 [2024-12-09 05:25:16.435860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.435900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.436201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.436254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.100 [2024-12-09 05:25:16.436466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.100 [2024-12-09 05:25:16.436506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.100 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.436819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.436859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.437168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.437219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 
00:30:34.101 [2024-12-09 05:25:16.437418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.437458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.437618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.437658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.437920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.437959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.438231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.438273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.438507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.438546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 
00:30:34.101 [2024-12-09 05:25:16.438752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.438792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.439085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.439125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.439339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.439380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.439656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.439696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.439938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.439978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 
00:30:34.101 [2024-12-09 05:25:16.440187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.440238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.440508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.440547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.440772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.440812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.441120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.441160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.441364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.441405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 
00:30:34.101 [2024-12-09 05:25:16.441671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.441711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.441997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.442037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.442338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.442379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.442666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.442707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.442947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.442988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 
00:30:34.101 [2024-12-09 05:25:16.443262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.443303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.443571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.443611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.443905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.443946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.444228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.444269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.444537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.444577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 
00:30:34.101 [2024-12-09 05:25:16.444849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.444890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.445135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.445174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.445396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.445437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.445713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.445754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.446026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.446066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 
00:30:34.101 [2024-12-09 05:25:16.446377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.446419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.101 qpair failed and we were unable to recover it. 00:30:34.101 [2024-12-09 05:25:16.446737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.101 [2024-12-09 05:25:16.446777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.447002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.447055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.447252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.447294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.447569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.447609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 
00:30:34.102 [2024-12-09 05:25:16.447768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.447808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.448093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.448133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.448418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.448460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.448748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.448788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.449078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.449117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 
00:30:34.102 [2024-12-09 05:25:16.449317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.449358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.449619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.449659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.449945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.449986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.450249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.450291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.450574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.450615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 
00:30:34.102 [2024-12-09 05:25:16.450894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.450934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.451245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.451287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.451553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.451594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.451856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.451896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.452199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.452249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 
00:30:34.102 [2024-12-09 05:25:16.452466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.452506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.452815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.452854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.453049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.453088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.453366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.453407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.453642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.453683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 
00:30:34.102 [2024-12-09 05:25:16.453899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.453938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.454137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.454177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.454476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.454518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.454725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.454766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.454995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.455036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 
00:30:34.102 [2024-12-09 05:25:16.455346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.455388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.455625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.455665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.455966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.456006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.456223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.456265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.456586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.456626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 
00:30:34.102 [2024-12-09 05:25:16.456864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.456905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.457197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.457248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.457524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.457565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.457874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.457915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.102 qpair failed and we were unable to recover it. 00:30:34.102 [2024-12-09 05:25:16.458146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.102 [2024-12-09 05:25:16.458187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 
00:30:34.103 [2024-12-09 05:25:16.458358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.458399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.458684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.458725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.458992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.459039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.459296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.459338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.459620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.459660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 
00:30:34.103 [2024-12-09 05:25:16.459801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.459841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.460151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.460191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.460425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.460466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.460777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.460816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.461026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.461066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 
00:30:34.103 [2024-12-09 05:25:16.461339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.461381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.461652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.461692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.461902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.461942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.462151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.462192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.462419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.462460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 
00:30:34.103 [2024-12-09 05:25:16.462691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.462731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.462997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.463037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.463231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.463274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.463549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.463588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.463889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.463929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 
00:30:34.103 [2024-12-09 05:25:16.464145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.464186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.464393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.464434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.464729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.464770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.465079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.465119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.465356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.465397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 
00:30:34.103 [2024-12-09 05:25:16.465703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.465743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.465951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.465991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.466300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.466341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.466643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.466683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.466886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.466926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 
00:30:34.103 [2024-12-09 05:25:16.467253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.467296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.467602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.467642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.467951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.467992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.468149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.468188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.468438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.468485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 
00:30:34.103 [2024-12-09 05:25:16.468773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.468814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.469088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.469137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.103 qpair failed and we were unable to recover it. 00:30:34.103 [2024-12-09 05:25:16.469452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.103 [2024-12-09 05:25:16.469496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.469716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.469779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.470078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.470126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 
00:30:34.104 [2024-12-09 05:25:16.470452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.470503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.470814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.470870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.471186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.471251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.471549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.471608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.471920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.471970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 
00:30:34.104 [2024-12-09 05:25:16.472284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.472336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.472655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.472704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.473017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.473065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.473373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.473423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.473725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.473774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 
00:30:34.104 [2024-12-09 05:25:16.474104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.474152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.474474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.474525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.474842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.474915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.475191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.475284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.475595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.475644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 
00:30:34.104 [2024-12-09 05:25:16.475941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.475990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.476243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.476293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.476550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.476600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.476897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.476949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.477255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.477305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 
00:30:34.104 [2024-12-09 05:25:16.477631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.477681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.477995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.478047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.478379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.478440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.478682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.478731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 00:30:34.104 [2024-12-09 05:25:16.479056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.104 [2024-12-09 05:25:16.479105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.104 qpair failed and we were unable to recover it. 
00:30:34.106 [2024-12-09 05:25:16.501894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.106 [2024-12-09 05:25:16.501985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.106 qpair failed and we were unable to recover it.
00:30:34.107 [2024-12-09 05:25:16.515812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.107 [2024-12-09 05:25:16.515861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.107 qpair failed and we were unable to recover it. 00:30:34.107 [2024-12-09 05:25:16.516101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.107 [2024-12-09 05:25:16.516151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.107 qpair failed and we were unable to recover it. 00:30:34.107 [2024-12-09 05:25:16.516458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.107 [2024-12-09 05:25:16.516546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.107 qpair failed and we were unable to recover it. 00:30:34.107 [2024-12-09 05:25:16.516930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.107 [2024-12-09 05:25:16.517014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.107 qpair failed and we were unable to recover it. 00:30:34.107 [2024-12-09 05:25:16.517380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.107 [2024-12-09 05:25:16.517449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.107 qpair failed and we were unable to recover it. 
00:30:34.107 [2024-12-09 05:25:16.517782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.107 [2024-12-09 05:25:16.517836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.107 qpair failed and we were unable to recover it. 00:30:34.107 [2024-12-09 05:25:16.518142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.107 [2024-12-09 05:25:16.518190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.107 qpair failed and we were unable to recover it. 00:30:34.107 [2024-12-09 05:25:16.518541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.107 [2024-12-09 05:25:16.518591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.107 qpair failed and we were unable to recover it. 00:30:34.107 [2024-12-09 05:25:16.518897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.107 [2024-12-09 05:25:16.518946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.107 qpair failed and we were unable to recover it. 00:30:34.107 [2024-12-09 05:25:16.519173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.107 [2024-12-09 05:25:16.519234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.107 qpair failed and we were unable to recover it. 
00:30:34.107 [2024-12-09 05:25:16.519474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.107 [2024-12-09 05:25:16.519525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.107 qpair failed and we were unable to recover it. 00:30:34.107 [2024-12-09 05:25:16.519831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.107 [2024-12-09 05:25:16.519878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.107 qpair failed and we were unable to recover it. 00:30:34.107 [2024-12-09 05:25:16.520093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.107 [2024-12-09 05:25:16.520141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.107 qpair failed and we were unable to recover it. 00:30:34.107 [2024-12-09 05:25:16.520413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.107 [2024-12-09 05:25:16.520457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.107 qpair failed and we were unable to recover it. 00:30:34.107 [2024-12-09 05:25:16.520756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.107 [2024-12-09 05:25:16.520796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.107 qpair failed and we were unable to recover it. 
00:30:34.107 [2024-12-09 05:25:16.521024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.107 [2024-12-09 05:25:16.521064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.521384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.521428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.521678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.521719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.522016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.522057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.522366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.522415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 
00:30:34.108 [2024-12-09 05:25:16.522711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.522753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.523038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.523079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.523353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.523395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.523689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.523729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.524022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.524064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 
00:30:34.108 [2024-12-09 05:25:16.524359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.524401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.524713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.524754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.525031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.525071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.525370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.525413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.525708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.525749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 
00:30:34.108 [2024-12-09 05:25:16.526050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.526090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.526398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.526441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.526736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.526776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.527096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.527137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.527391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.527434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 
00:30:34.108 [2024-12-09 05:25:16.527651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.527691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.527854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.527894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.528189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.528238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.528526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.528567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.528719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.528759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 
00:30:34.108 [2024-12-09 05:25:16.528978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.529019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.529248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.529292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.529441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.529481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.529787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.529828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.530113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.530154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 
00:30:34.108 [2024-12-09 05:25:16.530418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.530460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.530773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.530814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.108 [2024-12-09 05:25:16.531107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.108 [2024-12-09 05:25:16.531148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.108 qpair failed and we were unable to recover it. 00:30:34.382 [2024-12-09 05:25:16.531427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.382 [2024-12-09 05:25:16.531470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.382 qpair failed and we were unable to recover it. 00:30:34.382 [2024-12-09 05:25:16.531692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.382 [2024-12-09 05:25:16.531733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.382 qpair failed and we were unable to recover it. 
00:30:34.382 [2024-12-09 05:25:16.532050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.382 [2024-12-09 05:25:16.532091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.382 qpair failed and we were unable to recover it. 00:30:34.382 [2024-12-09 05:25:16.532420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.382 [2024-12-09 05:25:16.532461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.382 qpair failed and we were unable to recover it. 00:30:34.382 [2024-12-09 05:25:16.532705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.382 [2024-12-09 05:25:16.532746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.382 qpair failed and we were unable to recover it. 00:30:34.382 [2024-12-09 05:25:16.533071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.382 [2024-12-09 05:25:16.533112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.382 qpair failed and we were unable to recover it. 00:30:34.382 [2024-12-09 05:25:16.533361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.382 [2024-12-09 05:25:16.533404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.382 qpair failed and we were unable to recover it. 
00:30:34.382 [2024-12-09 05:25:16.533707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.382 [2024-12-09 05:25:16.533747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.382 qpair failed and we were unable to recover it. 00:30:34.382 [2024-12-09 05:25:16.534001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.382 [2024-12-09 05:25:16.534041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.382 qpair failed and we were unable to recover it. 00:30:34.382 [2024-12-09 05:25:16.534342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.382 [2024-12-09 05:25:16.534384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.382 qpair failed and we were unable to recover it. 00:30:34.382 [2024-12-09 05:25:16.534685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.534726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 00:30:34.383 [2024-12-09 05:25:16.535007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.535055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 
00:30:34.383 [2024-12-09 05:25:16.535375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.535418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 00:30:34.383 [2024-12-09 05:25:16.535641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.535682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 00:30:34.383 [2024-12-09 05:25:16.536003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.536043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 00:30:34.383 [2024-12-09 05:25:16.536353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.536396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 00:30:34.383 [2024-12-09 05:25:16.536674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.536714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 
00:30:34.383 [2024-12-09 05:25:16.537011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.537051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 00:30:34.383 [2024-12-09 05:25:16.537230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.537281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 00:30:34.383 [2024-12-09 05:25:16.537520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.537560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 00:30:34.383 [2024-12-09 05:25:16.537779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.537819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 00:30:34.383 [2024-12-09 05:25:16.538096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.538137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 
00:30:34.383 [2024-12-09 05:25:16.538461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.538503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 00:30:34.383 [2024-12-09 05:25:16.538728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.538769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 00:30:34.383 [2024-12-09 05:25:16.539016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.539056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 00:30:34.383 [2024-12-09 05:25:16.539338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.539381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 00:30:34.383 [2024-12-09 05:25:16.539646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.539687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 
00:30:34.383 [2024-12-09 05:25:16.539920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.539960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 00:30:34.383 [2024-12-09 05:25:16.540280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.540322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 00:30:34.383 [2024-12-09 05:25:16.540639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.540680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 00:30:34.383 [2024-12-09 05:25:16.540955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.540995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 00:30:34.383 [2024-12-09 05:25:16.541231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.383 [2024-12-09 05:25:16.541285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.383 qpair failed and we were unable to recover it. 
00:30:34.383 [2024-12-09 05:25:16.541513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.383 [2024-12-09 05:25:16.541554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.383 qpair failed and we were unable to recover it.
00:30:34.386 (last error group repeated 114 more times, [2024-12-09 05:25:16.541849] through [2024-12-09 05:25:16.576934])
00:30:34.386 [2024-12-09 05:25:16.577238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.386 [2024-12-09 05:25:16.577281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.386 qpair failed and we were unable to recover it. 00:30:34.386 [2024-12-09 05:25:16.577579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.386 [2024-12-09 05:25:16.577620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.386 qpair failed and we were unable to recover it. 00:30:34.386 [2024-12-09 05:25:16.577901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.386 [2024-12-09 05:25:16.577942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.386 qpair failed and we were unable to recover it. 00:30:34.386 [2024-12-09 05:25:16.578241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.386 [2024-12-09 05:25:16.578282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.386 qpair failed and we were unable to recover it. 00:30:34.386 [2024-12-09 05:25:16.578501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.386 [2024-12-09 05:25:16.578541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.386 qpair failed and we were unable to recover it. 
00:30:34.386 [2024-12-09 05:25:16.578767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.386 [2024-12-09 05:25:16.578808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.386 qpair failed and we were unable to recover it. 00:30:34.386 [2024-12-09 05:25:16.579028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.386 [2024-12-09 05:25:16.579067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.386 qpair failed and we were unable to recover it. 00:30:34.386 [2024-12-09 05:25:16.579373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.386 [2024-12-09 05:25:16.579417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.386 qpair failed and we were unable to recover it. 00:30:34.386 [2024-12-09 05:25:16.579753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.386 [2024-12-09 05:25:16.579795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.386 qpair failed and we were unable to recover it. 00:30:34.386 [2024-12-09 05:25:16.580001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.386 [2024-12-09 05:25:16.580041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.386 qpair failed and we were unable to recover it. 
00:30:34.386 [2024-12-09 05:25:16.580368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.386 [2024-12-09 05:25:16.580410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.386 qpair failed and we were unable to recover it. 00:30:34.386 [2024-12-09 05:25:16.580742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.386 [2024-12-09 05:25:16.580784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.386 qpair failed and we were unable to recover it. 00:30:34.386 [2024-12-09 05:25:16.581016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.386 [2024-12-09 05:25:16.581056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.386 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.581378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.581420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.581643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.581685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 
00:30:34.387 [2024-12-09 05:25:16.582001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.582041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.582407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.582450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.582746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.582787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.583014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.583055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.583352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.583395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 
00:30:34.387 [2024-12-09 05:25:16.583695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.583736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.584043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.584083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.584324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.584366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.584608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.584648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.584955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.585004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 
00:30:34.387 [2024-12-09 05:25:16.585301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.585344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.585623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.585664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.585957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.585999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.586294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.586335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.586645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.586687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 
00:30:34.387 [2024-12-09 05:25:16.586891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.586931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.587234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.587288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.587588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.587629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.587854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.587894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.588192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.588245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 
00:30:34.387 [2024-12-09 05:25:16.588527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.588569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.588817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.588857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.589071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.589112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.589436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.589479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.589775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.589815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 
00:30:34.387 [2024-12-09 05:25:16.590114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.590155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.590468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.590511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.590658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.590698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.590986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.591027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 00:30:34.387 [2024-12-09 05:25:16.591320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.387 [2024-12-09 05:25:16.591364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.387 qpair failed and we were unable to recover it. 
00:30:34.387 [2024-12-09 05:25:16.591661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.591701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.592014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.592055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.592351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.592394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.592707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.592747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.593022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.593063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 
00:30:34.388 [2024-12-09 05:25:16.593353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.593397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.593606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.593647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.593863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.593903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.594123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.594164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.594488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.594530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 
00:30:34.388 [2024-12-09 05:25:16.594829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.594869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.595162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.595203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.595536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.595578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.595874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.595914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.596236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.596278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 
00:30:34.388 [2024-12-09 05:25:16.596579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.596620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.596914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.596954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.597270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.597313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.597568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.597609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.597883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.597930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 
00:30:34.388 [2024-12-09 05:25:16.598229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.598272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.598574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.598614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.598916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.598956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.599262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.599305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.599523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.599564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 
00:30:34.388 [2024-12-09 05:25:16.599845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.599887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.600185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.600236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.600516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.600557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.600856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.600898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 00:30:34.388 [2024-12-09 05:25:16.601192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.388 [2024-12-09 05:25:16.601247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.388 qpair failed and we were unable to recover it. 
00:30:34.388 [2024-12-09 05:25:16.601452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.388 [2024-12-09 05:25:16.601492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.388 qpair failed and we were unable to recover it.
00:30:34.388 [2024-12-09 05:25:16.601764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.388 [2024-12-09 05:25:16.601805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.388 qpair failed and we were unable to recover it.
00:30:34.388 [2024-12-09 05:25:16.602122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.388 [2024-12-09 05:25:16.602163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.388 qpair failed and we were unable to recover it.
00:30:34.388 [2024-12-09 05:25:16.602428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.388 [2024-12-09 05:25:16.602469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.388 qpair failed and we were unable to recover it.
00:30:34.388 [2024-12-09 05:25:16.602751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.388 [2024-12-09 05:25:16.602792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.388 qpair failed and we were unable to recover it.
00:30:34.388 [2024-12-09 05:25:16.603085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.388 [2024-12-09 05:25:16.603126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.388 qpair failed and we were unable to recover it.
00:30:34.388 [2024-12-09 05:25:16.603382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.388 [2024-12-09 05:25:16.603425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.388 qpair failed and we were unable to recover it.
00:30:34.388 [2024-12-09 05:25:16.603739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.388 [2024-12-09 05:25:16.603780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.604007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.604047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.604362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.604405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.604703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.604744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.605041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.605082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.605371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.605413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.605559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.605600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.605901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.605940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.606188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.606239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.606549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.606592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.606978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.607018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.607314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.607357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.607668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.607710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.608002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.608042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.608355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.608398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.608672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.608713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.609012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.609053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.609329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.609370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.609646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.609687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.609964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.610005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.610247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.610288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.610588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.610629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.610929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.610977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.611232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.611285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.611561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.611601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.611875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.611915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.612152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.612192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.612508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.612550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.612885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.612927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.613203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.613257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.613573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.613614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.613888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.613928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.614136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.614177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.614475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.614517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.614719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.614759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.615030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.615071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.615372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.615416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.389 [2024-12-09 05:25:16.615649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.389 [2024-12-09 05:25:16.615689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.389 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.616003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.616044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.616348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.616390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.616703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.616744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.616945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.616986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.617284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.617325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.617614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.617655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.617901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.617942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.618247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.618288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.618588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.618629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.618853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.618894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.619097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.619138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.619472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.619516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.619794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.619834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.620133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.620174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.620491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.620533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.620809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.620850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.621146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.621187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.621498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.621540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.621814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.621854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.622161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.622202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.622549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.622590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.622882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.622923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.623243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.623290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.623548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.623589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.623873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.623920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.624146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.624187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.624333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.624375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.624668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.624709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.624931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.624972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.625253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.625295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.625586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.625626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.625900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.625941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.626203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.626254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.626552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.626592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.626888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.626929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.627149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.627190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.390 qpair failed and we were unable to recover it.
00:30:34.390 [2024-12-09 05:25:16.627519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.390 [2024-12-09 05:25:16.627561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.627807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.627848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.628080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.628122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.628412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.628455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.628748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.628788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.629062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.629103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.629401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.629442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.629731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.629771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.629980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.630021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.630340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.630382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.630644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.630684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.630970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.631010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.631260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.631303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.631579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.631619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.631899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.631939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.632225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.632274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.632497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.632538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.632810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.632851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.633026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.633067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.633377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.633419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.633715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.633756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.634052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.634093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.634327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.634368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.634694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.634734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.634970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.635010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.635323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.635366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.635600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.635641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.635875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.635916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.636246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.636288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.636600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.636640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.636946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.636987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.637225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.637267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.637576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.637616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.637913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.391 [2024-12-09 05:25:16.637953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.391 qpair failed and we were unable to recover it.
00:30:34.391 [2024-12-09 05:25:16.638263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.391 [2024-12-09 05:25:16.638306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.638529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.638570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.638866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.638906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.639184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.639246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.639537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.639577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 
00:30:34.392 [2024-12-09 05:25:16.639875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.639915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.640239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.640281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.640608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.640648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.640934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.640974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.641298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.641340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 
00:30:34.392 [2024-12-09 05:25:16.641569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.641609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.641927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.641968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.642242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.642284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.642584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.642625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.642871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.642912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 
00:30:34.392 [2024-12-09 05:25:16.643224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.643282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.643581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.643622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.643926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.643967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.644189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.644257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.644581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.644621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 
00:30:34.392 [2024-12-09 05:25:16.644848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.644889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.645187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.645246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.645497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.645538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.645834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.645875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.646151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.646192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 
00:30:34.392 [2024-12-09 05:25:16.646426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.646467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.646690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.646731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.647053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.647094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.647394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.647438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.647737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.647777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 
00:30:34.392 [2024-12-09 05:25:16.648018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.648059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.648293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.648336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.648637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.648677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.648882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.648923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.649224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.649266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 
00:30:34.392 [2024-12-09 05:25:16.649558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.649599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.392 [2024-12-09 05:25:16.649871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.392 [2024-12-09 05:25:16.649912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.392 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.650186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.650251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.650545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.650585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.650857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.650898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 
00:30:34.393 [2024-12-09 05:25:16.651141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.651182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.651515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.651557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.651852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.651892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.652116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.652156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.652458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.652501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 
00:30:34.393 [2024-12-09 05:25:16.652808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.652848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.653124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.653165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.653471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.653513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.653794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.653834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.654109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.654149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 
00:30:34.393 [2024-12-09 05:25:16.654449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.654491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.654663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.654703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.655019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.655060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.655361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.655404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.655703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.655743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 
00:30:34.393 [2024-12-09 05:25:16.656052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.656092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.656315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.656357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.656581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.656621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.656908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.656948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.657196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.657247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 
00:30:34.393 [2024-12-09 05:25:16.657537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.657578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.657800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.657847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.658125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.658166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.658476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.658516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.658811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.658852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 
00:30:34.393 [2024-12-09 05:25:16.659169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.659226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.659551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.659593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.659893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.659934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.660251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.660294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.660629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.660671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 
00:30:34.393 [2024-12-09 05:25:16.660978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.661017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.661267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.661309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.661607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.661647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.393 qpair failed and we were unable to recover it. 00:30:34.393 [2024-12-09 05:25:16.661953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.393 [2024-12-09 05:25:16.661993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.662202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.662252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 
00:30:34.394 [2024-12-09 05:25:16.662525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.662567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.662864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.662905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.663106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.663146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.663442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.663485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.663777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.663818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 
00:30:34.394 [2024-12-09 05:25:16.664106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.664146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.664473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.664515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.664814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.664854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.665165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.665220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.665444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.665484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 
00:30:34.394 [2024-12-09 05:25:16.665781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.665822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.666119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.666160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.666411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.666454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.666686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.666727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.667028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.667069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 
00:30:34.394 [2024-12-09 05:25:16.667376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.667419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.667656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.667697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.668001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.668040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.668340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.668381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.668692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.668732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 
00:30:34.394 [2024-12-09 05:25:16.668970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.669010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.669282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.669324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.669602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.669642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.669933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.669973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.670177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.670227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 
00:30:34.394 [2024-12-09 05:25:16.670549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.670590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.670880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.670926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.671236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.671282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.671578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.671619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.671914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.671953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 
00:30:34.394 [2024-12-09 05:25:16.672205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.672273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.672574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.672614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.672908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.672949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.673257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.673300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 00:30:34.394 [2024-12-09 05:25:16.673600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.673641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.394 qpair failed and we were unable to recover it. 
00:30:34.394 [2024-12-09 05:25:16.673947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.394 [2024-12-09 05:25:16.673987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.674267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.674309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.674554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.674594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.674868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.674908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.675220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.675280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 
00:30:34.395 [2024-12-09 05:25:16.675510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.675551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.675715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.675756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.675958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.675999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.676272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.676314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.676536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.676578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 
00:30:34.395 [2024-12-09 05:25:16.676894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.676935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.677167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.677221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.677382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.677424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.677716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.677757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.678048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.678088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 
00:30:34.395 [2024-12-09 05:25:16.678386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.678428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.678721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.678762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.679081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.679122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.679372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.679415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.679712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.679753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 
00:30:34.395 [2024-12-09 05:25:16.680028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.680069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.680369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.680411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.680703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.680744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.681057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.681097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.681393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.681435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 
00:30:34.395 [2024-12-09 05:25:16.681688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.681729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.681975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.682016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.682309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.682351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.682623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.682663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.682894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.682936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 
00:30:34.395 [2024-12-09 05:25:16.683222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.683274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.683587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.683634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.683949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.683989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.684287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.684329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.684652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.684694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 
00:30:34.395 [2024-12-09 05:25:16.684988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.685030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.685270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.685311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.685618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.395 [2024-12-09 05:25:16.685658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.395 qpair failed and we were unable to recover it. 00:30:34.395 [2024-12-09 05:25:16.685951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.685992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 00:30:34.396 [2024-12-09 05:25:16.686158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.686199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 
00:30:34.396 [2024-12-09 05:25:16.686531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.686573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 00:30:34.396 [2024-12-09 05:25:16.686864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.686905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 00:30:34.396 [2024-12-09 05:25:16.687147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.687187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 00:30:34.396 [2024-12-09 05:25:16.687499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.687541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 00:30:34.396 [2024-12-09 05:25:16.687867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.687908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 
00:30:34.396 [2024-12-09 05:25:16.688235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.688277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 00:30:34.396 [2024-12-09 05:25:16.688502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.688543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 00:30:34.396 [2024-12-09 05:25:16.688842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.688883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 00:30:34.396 [2024-12-09 05:25:16.689179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.689232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 00:30:34.396 [2024-12-09 05:25:16.689509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.689550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 
00:30:34.396 [2024-12-09 05:25:16.689846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.689888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 00:30:34.396 [2024-12-09 05:25:16.690095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.690135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 00:30:34.396 [2024-12-09 05:25:16.690425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.690467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 00:30:34.396 [2024-12-09 05:25:16.690766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.690807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 00:30:34.396 [2024-12-09 05:25:16.691024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.691064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 
00:30:34.396 [2024-12-09 05:25:16.691342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.691386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 00:30:34.396 [2024-12-09 05:25:16.691681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.691721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 00:30:34.396 [2024-12-09 05:25:16.692018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.692059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 00:30:34.396 [2024-12-09 05:25:16.692318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.692360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 00:30:34.396 [2024-12-09 05:25:16.692652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.692692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 
00:30:34.396 [2024-12-09 05:25:16.693001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.693042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 00:30:34.396 [2024-12-09 05:25:16.693267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.693309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 00:30:34.396 [2024-12-09 05:25:16.693583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.693624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 00:30:34.396 [2024-12-09 05:25:16.693882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.693923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 00:30:34.396 [2024-12-09 05:25:16.694167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.396 [2024-12-09 05:25:16.694221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.396 qpair failed and we were unable to recover it. 
00:30:34.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 662380 Killed                  "${NVMF_APP[@]}" "$@"
00:30:34.398 05:25:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:34.398 05:25:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:34.398 05:25:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:34.398 05:25:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:34.398 05:25:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:34.399 05:25:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=663201
00:30:34.399 05:25:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 663201
00:30:34.399 05:25:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:34.399 05:25:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 663201 ']'
00:30:34.399 05:25:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:34.399 05:25:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:34.399 05:25:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:34.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:34.399 05:25:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:34.399 05:25:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:34.399 [2024-12-09 05:25:16.724829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.399 [2024-12-09 05:25:16.724916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.399 qpair failed and we were unable to recover it.
00:30:34.399 [2024-12-09 05:25:16.725196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.399 [2024-12-09 05:25:16.725273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.399 qpair failed and we were unable to recover it. 00:30:34.399 [2024-12-09 05:25:16.725518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.399 [2024-12-09 05:25:16.725571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.399 qpair failed and we were unable to recover it. 00:30:34.399 [2024-12-09 05:25:16.725794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.399 [2024-12-09 05:25:16.725845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.399 qpair failed and we were unable to recover it. 00:30:34.399 [2024-12-09 05:25:16.726087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.399 [2024-12-09 05:25:16.726137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.399 qpair failed and we were unable to recover it. 00:30:34.399 [2024-12-09 05:25:16.726395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.399 [2024-12-09 05:25:16.726456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.399 qpair failed and we were unable to recover it. 
00:30:34.399 [2024-12-09 05:25:16.726780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.399 [2024-12-09 05:25:16.726831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.399 qpair failed and we were unable to recover it. 00:30:34.399 [2024-12-09 05:25:16.727079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.399 [2024-12-09 05:25:16.727133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.399 qpair failed and we were unable to recover it. 00:30:34.399 [2024-12-09 05:25:16.727385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.399 [2024-12-09 05:25:16.727449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.399 qpair failed and we were unable to recover it. 00:30:34.399 [2024-12-09 05:25:16.727726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.399 [2024-12-09 05:25:16.727772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.399 qpair failed and we were unable to recover it. 00:30:34.399 [2024-12-09 05:25:16.728086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.399 [2024-12-09 05:25:16.728128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.399 qpair failed and we were unable to recover it. 
00:30:34.399 [2024-12-09 05:25:16.728447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.399 [2024-12-09 05:25:16.728490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.399 qpair failed and we were unable to recover it. 00:30:34.399 [2024-12-09 05:25:16.728743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.399 [2024-12-09 05:25:16.728786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.399 qpair failed and we were unable to recover it. 00:30:34.399 [2024-12-09 05:25:16.729015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.399 [2024-12-09 05:25:16.729056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.399 qpair failed and we were unable to recover it. 00:30:34.399 [2024-12-09 05:25:16.729349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.399 [2024-12-09 05:25:16.729392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.399 qpair failed and we were unable to recover it. 00:30:34.399 [2024-12-09 05:25:16.729568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.399 [2024-12-09 05:25:16.729608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.399 qpair failed and we were unable to recover it. 
00:30:34.399 [2024-12-09 05:25:16.729864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.399 [2024-12-09 05:25:16.729907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.399 qpair failed and we were unable to recover it. 00:30:34.399 [2024-12-09 05:25:16.730198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.399 [2024-12-09 05:25:16.730251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.399 qpair failed and we were unable to recover it. 00:30:34.399 [2024-12-09 05:25:16.730493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.399 [2024-12-09 05:25:16.730536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.399 qpair failed and we were unable to recover it. 00:30:34.399 [2024-12-09 05:25:16.730834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.730885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.731116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.731186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 
00:30:34.400 [2024-12-09 05:25:16.731549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.731607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.731885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.731942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.732181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.732249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.732527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.732579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.732881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.732937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 
00:30:34.400 [2024-12-09 05:25:16.733255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.733306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.733535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.733587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.733772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.733822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.734058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.734108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.734360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.734410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 
00:30:34.400 [2024-12-09 05:25:16.734612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.734668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.734859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.734908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.735086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.735135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.735476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.735528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.735761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.735819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 
00:30:34.400 [2024-12-09 05:25:16.736174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.736246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.736565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.736615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.736851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.736899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.737198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.737279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.737568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.737618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 
00:30:34.400 [2024-12-09 05:25:16.737906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.737955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.738205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.738273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.738537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.738586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.738863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.738911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.739133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.739182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 
00:30:34.400 [2024-12-09 05:25:16.739439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.739483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.739792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.739833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.740015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.740055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.740362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.740406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.740689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.740731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 
00:30:34.400 [2024-12-09 05:25:16.740968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.741010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.741323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.741366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.741686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.741727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.741999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.742042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.400 [2024-12-09 05:25:16.742287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.742329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 
00:30:34.400 [2024-12-09 05:25:16.742545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.400 [2024-12-09 05:25:16.742585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.400 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.742740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.742780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.743081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.743138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.743463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.743507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.743751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.743792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 
00:30:34.401 [2024-12-09 05:25:16.744014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.744055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.744312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.744354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.744517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.744558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.744789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.744829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.745051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.745092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 
00:30:34.401 [2024-12-09 05:25:16.745330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.745373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.745609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.745650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.745835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.745877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.746102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.746142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.746430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.746472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 
00:30:34.401 [2024-12-09 05:25:16.746700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.746741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.747020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.747062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.747275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.747318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.747638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.747681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.747998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.748044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 
00:30:34.401 [2024-12-09 05:25:16.748269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.748315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.748595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.748642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.748876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.748917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.749121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.749169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.749416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.749469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 
00:30:34.401 [2024-12-09 05:25:16.749691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.749732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.749952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.749994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.750238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.750280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.750541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.750585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.750885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.750936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 
00:30:34.401 [2024-12-09 05:25:16.751109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.751158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.751418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.751483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.751654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.751706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.751978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.752026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.401 qpair failed and we were unable to recover it. 00:30:34.401 [2024-12-09 05:25:16.752175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.401 [2024-12-09 05:25:16.752251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.402 qpair failed and we were unable to recover it. 
00:30:34.402 [2024-12-09 05:25:16.752536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.402 [2024-12-09 05:25:16.752586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.402 qpair failed and we were unable to recover it. 00:30:34.402 [2024-12-09 05:25:16.752883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.402 [2024-12-09 05:25:16.752933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.402 qpair failed and we were unable to recover it. 00:30:34.402 [2024-12-09 05:25:16.753169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.402 [2024-12-09 05:25:16.753233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.402 qpair failed and we were unable to recover it. 00:30:34.402 [2024-12-09 05:25:16.753519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.402 [2024-12-09 05:25:16.753570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.402 qpair failed and we were unable to recover it. 00:30:34.402 [2024-12-09 05:25:16.753820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.402 [2024-12-09 05:25:16.753869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.402 qpair failed and we were unable to recover it. 
00:30:34.402 [2024-12-09 05:25:16.754183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.754259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.754483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.754536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.754783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.754833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.755061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.755110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.755364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.755417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.755637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.755688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.755932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.755992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.756255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.756306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.756533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.756586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.756849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.756899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.757126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.757169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.757477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.757519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.757733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.757776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.758077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.758119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.758313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.758357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.758578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.758619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.758832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.758873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.759027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.759069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.759300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.759345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.759505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.759546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.759747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.759788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.759956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.759997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.760196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.760252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.760468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.760517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.760742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.760783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.761093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.761134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.761380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.761423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.761650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.761692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.761918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.761959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.762182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.762240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.762465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.762506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.402 [2024-12-09 05:25:16.762730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.402 [2024-12-09 05:25:16.762784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.402 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.763013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.763061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.763228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.763283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.763510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.763551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.763768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.763809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.764078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.764118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.764399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.764442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.764600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.764642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.764858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.764900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.765237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.765280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.765518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.765562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.765785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.765826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.765974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.766017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.766180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.766237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.766384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.766426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.766734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.766785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.767070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.767120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.767397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.767443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.767649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.767695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.767992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.768036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.768318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.768364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.768514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.768554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.768763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.768804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.768966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.769009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.769263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.769308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.769530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.769574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.769716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.769757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.770031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.770073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.770282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.770327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.770480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.770522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.770761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.770802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.771082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.771124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.771418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.771485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.771793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.771842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.772075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.772124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.772387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.772432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.772666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.772716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.403 [2024-12-09 05:25:16.772894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.403 [2024-12-09 05:25:16.772943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.403 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.773249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.773299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.773608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.773657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.773950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.773999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.774223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.774273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.774498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.774546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.774839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.774889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.775123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.775172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.775418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.775460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.775704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.775745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.775910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.775951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.776108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.776149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.776365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.776415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.776734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.776783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.776952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.777001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.777229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.777280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.777522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.777566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.777867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.777909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.778199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.778267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.778414] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
00:30:34.404 [2024-12-09 05:25:16.778426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.778475] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:34.404 [2024-12-09 05:25:16.778478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.778731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.778777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.778930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.778977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.779307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.779350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.779559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.779599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.779737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.779788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.780001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.404 [2024-12-09 05:25:16.780056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.404 qpair failed and we were unable to recover it.
00:30:34.404 [2024-12-09 05:25:16.780401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.404 [2024-12-09 05:25:16.780492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.404 qpair failed and we were unable to recover it. 00:30:34.404 [2024-12-09 05:25:16.780804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.404 [2024-12-09 05:25:16.780850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.404 qpair failed and we were unable to recover it. 00:30:34.404 [2024-12-09 05:25:16.781072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.404 [2024-12-09 05:25:16.781114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.404 qpair failed and we were unable to recover it. 00:30:34.404 [2024-12-09 05:25:16.781432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.404 [2024-12-09 05:25:16.781475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.404 qpair failed and we were unable to recover it. 00:30:34.404 [2024-12-09 05:25:16.781637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.404 [2024-12-09 05:25:16.781679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.404 qpair failed and we were unable to recover it. 
00:30:34.404 [2024-12-09 05:25:16.781883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.404 [2024-12-09 05:25:16.781924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.404 qpair failed and we were unable to recover it. 00:30:34.404 [2024-12-09 05:25:16.782137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.404 [2024-12-09 05:25:16.782177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.404 qpair failed and we were unable to recover it. 00:30:34.404 [2024-12-09 05:25:16.782343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.404 [2024-12-09 05:25:16.782385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.404 qpair failed and we were unable to recover it. 00:30:34.404 [2024-12-09 05:25:16.782604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.404 [2024-12-09 05:25:16.782656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.404 qpair failed and we were unable to recover it. 00:30:34.404 [2024-12-09 05:25:16.782876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.404 [2024-12-09 05:25:16.782917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.404 qpair failed and we were unable to recover it. 
00:30:34.404 [2024-12-09 05:25:16.783123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.404 [2024-12-09 05:25:16.783176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.404 qpair failed and we were unable to recover it. 00:30:34.404 [2024-12-09 05:25:16.783488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.783531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.783776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.783817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.784104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.784145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.784304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.784348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 
00:30:34.405 [2024-12-09 05:25:16.784628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.784669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.784903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.784943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.785157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.785199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.785485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.785525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.785803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.785843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 
00:30:34.405 [2024-12-09 05:25:16.786057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.786099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.786328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.786371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.786648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.786689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.786911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.786952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.787223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.787265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 
00:30:34.405 [2024-12-09 05:25:16.787468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.787509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.787741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.787780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.787999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.788040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.788252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.788294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.788565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.788606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 
00:30:34.405 [2024-12-09 05:25:16.788880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.788920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.789068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.789109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.789319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.789362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.789560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.789601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.789913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.789955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 
00:30:34.405 [2024-12-09 05:25:16.790232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.790280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.790501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.790542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.790757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.790798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.791021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.791062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.791280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.791321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 
00:30:34.405 [2024-12-09 05:25:16.791535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.791576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.791887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.791935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.792136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.792177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.792407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.792449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 00:30:34.405 [2024-12-09 05:25:16.792603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.405 [2024-12-09 05:25:16.792643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.405 qpair failed and we were unable to recover it. 
00:30:34.405 [2024-12-09 05:25:16.792936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.792976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.793150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.793192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.793404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.793444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.793715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.793755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.794058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.794099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 
00:30:34.406 [2024-12-09 05:25:16.794318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.794367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.794589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.794630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.794763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.794804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.795087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.795128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.795425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.795467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 
00:30:34.406 [2024-12-09 05:25:16.795676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.795717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.795878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.795920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.796134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.796175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.796338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.796379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.796517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.796557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 
00:30:34.406 [2024-12-09 05:25:16.796689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.796730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.796970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.797010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.797322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.797364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.797564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.797605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.797753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.797793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 
00:30:34.406 [2024-12-09 05:25:16.798037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.798077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.798234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.798276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.798493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.798535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.798801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.798841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.799054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.799095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 
00:30:34.406 [2024-12-09 05:25:16.799343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.799387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.799591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.799632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.799784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.799825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.800026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.800067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.800250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.800291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 
00:30:34.406 [2024-12-09 05:25:16.800448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.800489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.800754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.800802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.801044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.801084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.801389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.801431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.801575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.801615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 
00:30:34.406 [2024-12-09 05:25:16.801922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.801961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.802232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.802274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.406 qpair failed and we were unable to recover it. 00:30:34.406 [2024-12-09 05:25:16.802434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.406 [2024-12-09 05:25:16.802475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.802739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.802780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.803026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.803066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 
00:30:34.407 [2024-12-09 05:25:16.803278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.803319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.803467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.803507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.803710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.803751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.803899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.803940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.804159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.804199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 
00:30:34.407 [2024-12-09 05:25:16.804441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.804483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.804686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.804726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.804869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.804910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.805197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.805251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.805548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.805588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 
00:30:34.407 [2024-12-09 05:25:16.805803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.805844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.806107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.806147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.806369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.806412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.806561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.806602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.806742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.806783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 
00:30:34.407 [2024-12-09 05:25:16.807001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.807041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.807329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.807371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.807563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.807609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.807880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.807926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.808191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.808243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 
00:30:34.407 [2024-12-09 05:25:16.808469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.808510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.808661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.808701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.808990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.809030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.809257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.809300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.809498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.809538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 
00:30:34.407 [2024-12-09 05:25:16.809692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.809732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.809937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.809982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.810199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.810248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.810512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.810553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.810824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.810863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 
00:30:34.407 [2024-12-09 05:25:16.811080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.811120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.811428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.811470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.811741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.811782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.812044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.812084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.407 [2024-12-09 05:25:16.812300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.812342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 
00:30:34.407 [2024-12-09 05:25:16.812555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.407 [2024-12-09 05:25:16.812595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.407 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.812808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.812849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.813082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.813122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.813432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.813474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.813780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.813821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 
00:30:34.408 [2024-12-09 05:25:16.814014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.814056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.814269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.814311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.814449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.814490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.814703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.814744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.814893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.814933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 
00:30:34.408 [2024-12-09 05:25:16.815196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.815252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.815445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.815486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.815694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.815734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.815944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.815984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.816191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.816243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 
00:30:34.408 [2024-12-09 05:25:16.816461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.816501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.816714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.816755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.816973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.817013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.817230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.817274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.817537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.817577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 
00:30:34.408 [2024-12-09 05:25:16.817866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.817905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.818135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.818176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.818341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.818383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.818587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.818627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.818845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.818885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 
00:30:34.408 [2024-12-09 05:25:16.819103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.819144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.819472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.819515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.819835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.819877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.820088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.820147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.820424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.820466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 
00:30:34.408 [2024-12-09 05:25:16.820669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.820709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.820992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.821034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.821266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.821308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.821448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.821489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.821723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.821764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 
00:30:34.408 [2024-12-09 05:25:16.821979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.822018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.822223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.822265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.822531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.822573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.408 [2024-12-09 05:25:16.822779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.408 [2024-12-09 05:25:16.822819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.408 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.823083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.823124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 
00:30:34.409 [2024-12-09 05:25:16.823398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.823441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.823680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.823720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.823917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.823963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.824265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.824306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.824533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.824573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 
00:30:34.409 [2024-12-09 05:25:16.824820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.824860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.825071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.825111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.825382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.825423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.825587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.825627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.825776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.825815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 
00:30:34.409 [2024-12-09 05:25:16.825944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.825985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.826136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.826177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.826520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.826564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.826854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.826895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.827110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.827151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 
00:30:34.409 [2024-12-09 05:25:16.827313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.827356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.827575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.827618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.827767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.827810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.828075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.828115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.828328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.828371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 
00:30:34.409 [2024-12-09 05:25:16.828632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.828673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.828926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.828966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.829232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.829273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.829395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.829435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.829562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.829603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 
00:30:34.409 [2024-12-09 05:25:16.829734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.829775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.830034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.830095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.830304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.830348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.830563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.830603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 00:30:34.409 [2024-12-09 05:25:16.830867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.409 [2024-12-09 05:25:16.830907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.409 qpair failed and we were unable to recover it. 
00:30:34.409 [2024-12-09 05:25:16.831193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.409 [2024-12-09 05:25:16.831249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.409 qpair failed and we were unable to recover it.
00:30:34.409 [2024-12-09 05:25:16.831413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.409 [2024-12-09 05:25:16.831455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.409 qpair failed and we were unable to recover it.
00:30:34.409 [2024-12-09 05:25:16.831668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.409 [2024-12-09 05:25:16.831708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.409 qpair failed and we were unable to recover it.
00:30:34.409 [2024-12-09 05:25:16.831995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.409 [2024-12-09 05:25:16.832036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.409 qpair failed and we were unable to recover it.
00:30:34.409 [2024-12-09 05:25:16.832182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.409 [2024-12-09 05:25:16.832234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.409 qpair failed and we were unable to recover it.
00:30:34.409 [2024-12-09 05:25:16.832379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.409 [2024-12-09 05:25:16.832420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.409 qpair failed and we were unable to recover it.
00:30:34.409 [2024-12-09 05:25:16.832680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.410 [2024-12-09 05:25:16.832720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.410 qpair failed and we were unable to recover it.
00:30:34.410 [2024-12-09 05:25:16.832916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.410 [2024-12-09 05:25:16.832966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.410 qpair failed and we were unable to recover it.
00:30:34.410 [2024-12-09 05:25:16.833256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.410 [2024-12-09 05:25:16.833305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.410 qpair failed and we were unable to recover it.
00:30:34.410 [2024-12-09 05:25:16.833554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.410 [2024-12-09 05:25:16.833601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.410 qpair failed and we were unable to recover it.
00:30:34.410 [2024-12-09 05:25:16.833824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.410 [2024-12-09 05:25:16.833867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.410 qpair failed and we were unable to recover it.
00:30:34.410 [2024-12-09 05:25:16.834157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.410 [2024-12-09 05:25:16.834198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.410 qpair failed and we were unable to recover it.
00:30:34.410 [2024-12-09 05:25:16.834445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.410 [2024-12-09 05:25:16.834486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.410 qpair failed and we were unable to recover it.
00:30:34.410 [2024-12-09 05:25:16.834702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.410 [2024-12-09 05:25:16.834748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.410 qpair failed and we were unable to recover it.
00:30:34.410 [2024-12-09 05:25:16.835067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.410 [2024-12-09 05:25:16.835110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.410 qpair failed and we were unable to recover it.
00:30:34.410 [2024-12-09 05:25:16.835350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.410 [2024-12-09 05:25:16.835391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.410 qpair failed and we were unable to recover it.
00:30:34.410 [2024-12-09 05:25:16.835604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.410 [2024-12-09 05:25:16.835647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.410 qpair failed and we were unable to recover it.
00:30:34.410 [2024-12-09 05:25:16.835787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.410 [2024-12-09 05:25:16.835829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.410 qpair failed and we were unable to recover it.
00:30:34.410 [2024-12-09 05:25:16.836091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.410 [2024-12-09 05:25:16.836132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.410 qpair failed and we were unable to recover it.
00:30:34.410 [2024-12-09 05:25:16.836433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.410 [2024-12-09 05:25:16.836474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.410 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.836690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.836731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.836955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.836996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.837154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.837198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.837433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.837475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.837689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.837731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.837964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.838005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.838247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.838289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.838431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.838471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.838682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.838722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.838918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.838958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.839103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.839143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.839458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.839499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.839694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.839734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.839892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.839933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.840144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.840184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.840337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.840385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.840582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.840623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.840910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.840950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.841142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.841182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.841460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.841501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.841647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.841687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.841894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.841940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.842153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.842195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.842427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.842468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.842731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.842771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.842930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.842970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.843097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.843137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.843371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.685 [2024-12-09 05:25:16.843413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.685 qpair failed and we were unable to recover it.
00:30:34.685 [2024-12-09 05:25:16.843712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.843752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.844051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.844093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.844289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.844331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.844531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.844572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.844806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.844846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.845072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.845112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.845419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.845461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.845684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.845725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.845949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.845989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.846123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.846164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.846481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.846523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.846815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.846855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.847140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.847180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.847451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.847493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.847731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.847783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.847989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.848030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.848176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.848229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.848381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.848421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.848548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.848588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.848733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.848774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.848965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.849005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.849139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.849179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.849391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.849432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.849715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.849755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.849969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.850010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.850297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.850339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.850531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.850572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.850701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.850742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.851017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.851098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.851486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.851541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.851787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.851837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.852059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.852107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.852356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.852407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.852694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.852748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.852937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.852980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.853130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.853170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.853446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.853523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.853702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.686 [2024-12-09 05:25:16.853754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.686 qpair failed and we were unable to recover it.
00:30:34.686 [2024-12-09 05:25:16.854064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.687 [2024-12-09 05:25:16.854115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.687 qpair failed and we were unable to recover it.
00:30:34.687 [2024-12-09 05:25:16.854320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.687 [2024-12-09 05:25:16.854377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.687 qpair failed and we were unable to recover it.
00:30:34.687 [2024-12-09 05:25:16.854688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.687 [2024-12-09 05:25:16.854745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.687 qpair failed and we were unable to recover it.
00:30:34.687 [2024-12-09 05:25:16.854972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.687 [2024-12-09 05:25:16.855050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.687 qpair failed and we were unable to recover it.
00:30:34.687 [2024-12-09 05:25:16.855306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.687 [2024-12-09 05:25:16.855348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.687 qpair failed and we were unable to recover it.
00:30:34.687 [2024-12-09 05:25:16.855560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.687 [2024-12-09 05:25:16.855601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.687 qpair failed and we were unable to recover it.
00:30:34.687 [2024-12-09 05:25:16.855860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.687 [2024-12-09 05:25:16.855900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.687 qpair failed and we were unable to recover it.
00:30:34.687 [2024-12-09 05:25:16.856092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.687 [2024-12-09 05:25:16.856131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.687 qpair failed and we were unable to recover it.
00:30:34.687 [2024-12-09 05:25:16.856368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.687 [2024-12-09 05:25:16.856411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.687 qpair failed and we were unable to recover it.
00:30:34.687 [2024-12-09 05:25:16.856579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.856620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.856910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.856950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.857196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.857252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.857513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.857554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.857764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.857804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 
00:30:34.687 [2024-12-09 05:25:16.857940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.857980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.858259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.858301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.858589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.858630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.858898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.858939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.859087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.859127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 
00:30:34.687 [2024-12-09 05:25:16.859379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.859420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.859722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.859764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.860053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.860094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.860235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.860276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.860560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.860600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 
00:30:34.687 [2024-12-09 05:25:16.860859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.860900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.861190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.861256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.861419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.861460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.861730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.861770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.861980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.862020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 
00:30:34.687 [2024-12-09 05:25:16.862242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.862285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.862484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.862524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.862733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.862773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.862981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.863022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.863269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.863310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 
00:30:34.687 [2024-12-09 05:25:16.863442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.863482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.863744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.863784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.863915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.863956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.687 [2024-12-09 05:25:16.864263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.687 [2024-12-09 05:25:16.864304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.687 qpair failed and we were unable to recover it. 00:30:34.688 [2024-12-09 05:25:16.864509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.688 [2024-12-09 05:25:16.864549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.688 qpair failed and we were unable to recover it. 
00:30:34.688 [2024-12-09 05:25:16.864829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.688 [2024-12-09 05:25:16.864870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.688 qpair failed and we were unable to recover it. 00:30:34.688 [2024-12-09 05:25:16.865083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.688 [2024-12-09 05:25:16.865123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.688 qpair failed and we were unable to recover it. 00:30:34.688 [2024-12-09 05:25:16.865397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.688 [2024-12-09 05:25:16.865438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.688 qpair failed and we were unable to recover it. 00:30:34.688 [2024-12-09 05:25:16.865704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.688 [2024-12-09 05:25:16.865744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.688 qpair failed and we were unable to recover it. 00:30:34.688 [2024-12-09 05:25:16.866032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.688 [2024-12-09 05:25:16.866072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.688 qpair failed and we were unable to recover it. 
00:30:34.688 [2024-12-09 05:25:16.866357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.688 [2024-12-09 05:25:16.866414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.688 qpair failed and we were unable to recover it.
00:30:34.689 [2024-12-09 05:25:16.877245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.689 [2024-12-09 05:25:16.877323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.689 qpair failed and we were unable to recover it.
00:30:34.690 [2024-12-09 05:25:16.884955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:34.690 [2024-12-09 05:25:16.885135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.885175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.885317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.885358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.885502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.885542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.885730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.885770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.886007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.886049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 
00:30:34.690 [2024-12-09 05:25:16.886262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.886303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.886606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.886646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.886903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.886944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.887169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.887223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.887433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.887472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 
00:30:34.690 [2024-12-09 05:25:16.887705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.887746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.887949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.887989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.888250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.888291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.888488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.888528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.888787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.888827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 
00:30:34.690 [2024-12-09 05:25:16.889092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.889131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.889396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.889438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.889644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.889683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.889944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.889984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.890204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.890255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 
00:30:34.690 [2024-12-09 05:25:16.890479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.890519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.890782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.890822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.891028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.891068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.891326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.891375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.891516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.891557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 
00:30:34.690 [2024-12-09 05:25:16.891747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.891787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.891939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.891980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.892186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.892235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.892429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.892468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 00:30:34.690 [2024-12-09 05:25:16.892595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.892636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.690 qpair failed and we were unable to recover it. 
00:30:34.690 [2024-12-09 05:25:16.892863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.690 [2024-12-09 05:25:16.892903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.893183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.893234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.893430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.893470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.893778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.893819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.894023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.894064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 
00:30:34.691 [2024-12-09 05:25:16.894379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.894421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.894682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.894721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.894945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.894986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.895142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.895182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.895402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.895443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 
00:30:34.691 [2024-12-09 05:25:16.895731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.895773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.895991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.896034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.896231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.896273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.896470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.896512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.896704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.896743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 
00:30:34.691 [2024-12-09 05:25:16.896900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.896939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.897059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.897099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.897330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.897372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.897569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.897608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.897824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.897864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 
00:30:34.691 [2024-12-09 05:25:16.898004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.898045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.898248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.898290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.898493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.898534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.898731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.898772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.898975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.899015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 
00:30:34.691 [2024-12-09 05:25:16.899243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.899286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.899496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.899535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.899792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.899832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.900032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.900073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.900277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.900319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 
00:30:34.691 [2024-12-09 05:25:16.900545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.900586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.900852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.900892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.901110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.901150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.901425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.901472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.901612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.901652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 
00:30:34.691 [2024-12-09 05:25:16.901909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.901949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.902229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.902270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.691 qpair failed and we were unable to recover it. 00:30:34.691 [2024-12-09 05:25:16.902469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.691 [2024-12-09 05:25:16.902509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.902719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.902759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.903005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.903045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 
00:30:34.692 [2024-12-09 05:25:16.903256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.903296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.903508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.903548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.903747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.903787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.904053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.904093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.904308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.904350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 
00:30:34.692 [2024-12-09 05:25:16.904637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.904678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.904816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.904855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.905120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.905161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.905429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.905471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.905774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.905814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 
00:30:34.692 [2024-12-09 05:25:16.906017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.906057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.906286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.906328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.906545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.906585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.906789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.906828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.907021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.907062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 
00:30:34.692 [2024-12-09 05:25:16.907270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.907311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.907569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.907609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.907772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.907812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.908025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.908065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.908281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.908323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 
00:30:34.692 [2024-12-09 05:25:16.908576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.908616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.908923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.908963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.909266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.909307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.909565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.909605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.909807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.909847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 
00:30:34.692 [2024-12-09 05:25:16.910124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.910164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.910398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.910439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.910696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.910736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.910955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.910995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.911255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.911296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 
00:30:34.692 [2024-12-09 05:25:16.911500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.911540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.911751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.911792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.912044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.912084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.912306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.912354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 00:30:34.692 [2024-12-09 05:25:16.912636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.692 [2024-12-09 05:25:16.912677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.692 qpair failed and we were unable to recover it. 
00:30:34.693 [2024-12-09 05:25:16.912872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.912912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.913106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.913147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.913283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.913325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.913604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.913645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.913855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.913896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 
00:30:34.693 [2024-12-09 05:25:16.914154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.914194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.914412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.914453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.914648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.914687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.914989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.915029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.915292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.915334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 
00:30:34.693 [2024-12-09 05:25:16.915531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.915571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.915867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.915907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.916173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.916228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.916462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.916503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.916767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.916807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 
00:30:34.693 [2024-12-09 05:25:16.917080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.917119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.917345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.917387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.917665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.917705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.917917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.917957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.918235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.918277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 
00:30:34.693 [2024-12-09 05:25:16.918482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.918522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.918732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.918772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.918913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.918954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.919222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.919263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.919521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.919561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 
00:30:34.693 [2024-12-09 05:25:16.919697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.919737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.919944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.919985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.920266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.920308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.920575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.920618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.920918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.920964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 
00:30:34.693 [2024-12-09 05:25:16.921116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.921156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.921363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.921404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.921608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.921648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.921932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.921975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.922285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.922330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 
00:30:34.693 [2024-12-09 05:25:16.922551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.922593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.922742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.693 [2024-12-09 05:25:16.922784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.693 qpair failed and we were unable to recover it. 00:30:34.693 [2024-12-09 05:25:16.922920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.922963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 00:30:34.694 [2024-12-09 05:25:16.923099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.923147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 00:30:34.694 [2024-12-09 05:25:16.923395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.923455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 
00:30:34.694 [2024-12-09 05:25:16.923402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:34.694 [2024-12-09 05:25:16.923436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:34.694 [2024-12-09 05:25:16.923446] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:34.694 [2024-12-09 05:25:16.923455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:34.694 [2024-12-09 05:25:16.923462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:34.694 [2024-12-09 05:25:16.923751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.923808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 00:30:34.694 [2024-12-09 05:25:16.924026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.924069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 00:30:34.694 [2024-12-09 05:25:16.924347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.924391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 
00:30:34.694 [2024-12-09 05:25:16.924594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.924634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 00:30:34.694 [2024-12-09 05:25:16.924846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.924886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 00:30:34.694 [2024-12-09 05:25:16.925141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.925183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 00:30:34.694 [2024-12-09 05:25:16.925281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:34.694 [2024-12-09 05:25:16.925343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.925384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 
00:30:34.694 [2024-12-09 05:25:16.925391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:34.694 [2024-12-09 05:25:16.925500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:34.694 [2024-12-09 05:25:16.925501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:34.694 [2024-12-09 05:25:16.925586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.925624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 00:30:34.694 [2024-12-09 05:25:16.925917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.925964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 00:30:34.694 [2024-12-09 05:25:16.926125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.926166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 00:30:34.694 [2024-12-09 05:25:16.926389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.926430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 
00:30:34.694 [2024-12-09 05:25:16.926694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.926736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 00:30:34.694 [2024-12-09 05:25:16.926999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.927040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 00:30:34.694 [2024-12-09 05:25:16.927280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.927321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 00:30:34.694 [2024-12-09 05:25:16.927525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.927565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 00:30:34.694 [2024-12-09 05:25:16.927798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.927838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 
00:30:34.694 [2024-12-09 05:25:16.928134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.928173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 00:30:34.694 [2024-12-09 05:25:16.928440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.928481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 00:30:34.694 [2024-12-09 05:25:16.928694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.928735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 00:30:34.694 [2024-12-09 05:25:16.928887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.928927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 00:30:34.694 [2024-12-09 05:25:16.929082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.694 [2024-12-09 05:25:16.929122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.694 qpair failed and we were unable to recover it. 
00:30:34.694 [2024-12-09 05:25:16.929335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.694 [2024-12-09 05:25:16.929376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.694 qpair failed and we were unable to recover it.
00:30:34.694 [2024-12-09 05:25:16.929701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.694 [2024-12-09 05:25:16.929741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.694 qpair failed and we were unable to recover it.
00:30:34.694 [2024-12-09 05:25:16.929949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.694 [2024-12-09 05:25:16.929989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.694 qpair failed and we were unable to recover it.
00:30:34.694 [2024-12-09 05:25:16.930195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.694 [2024-12-09 05:25:16.930248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.694 qpair failed and we were unable to recover it.
00:30:34.694 [2024-12-09 05:25:16.930486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.694 [2024-12-09 05:25:16.930525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.694 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.930718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.930758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.930988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.931028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.931277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.931318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.931523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.931563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.931702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.931743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.931976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.932016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.932168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.932217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.932421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.932462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.932769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.932809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.933043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.933090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.933393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.933438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.933702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.933742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.934016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.934056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.934315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.934356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.934640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.934680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.934957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.934998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.935290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.935332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.935595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.935634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.935914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.935955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.936163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.936204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.936527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.936571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.936894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.936935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.937139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.937189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.937443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.937484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.937758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.937799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.938061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.938102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.938404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.938447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.938707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.938748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.938965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.939006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.939225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.939266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.939526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.939567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.939734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.939776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.940031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.940072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.940222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.940264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.940554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.940596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.940869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.940911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.941172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.695 [2024-12-09 05:25:16.941221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.695 qpair failed and we were unable to recover it.
00:30:34.695 [2024-12-09 05:25:16.941435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.941477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.941786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.941827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.942051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.942093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.942307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.942350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.942610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.942650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.942807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.942849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.943053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.943095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.943403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.943445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.943714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.943756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.943986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.944028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.944304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.944346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.944630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.944672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.944905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.944964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.945258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.945303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.945626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.945666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.945951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.945992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.946278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.946321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.946609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.946650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.946885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.946925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.947231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.947274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.947485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.947526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.947734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.947775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.948035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.948076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.948296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.948338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.948548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.948589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.948795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.948846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.949156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.949197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.949439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.949482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.949692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.949733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.950016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.950057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.950339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.950383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.950605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.950646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.950940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.950983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.951318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.951374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.951592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.951636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.951941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.951981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.952251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.952299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.696 qpair failed and we were unable to recover it.
00:30:34.696 [2024-12-09 05:25:16.952566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.696 [2024-12-09 05:25:16.952608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.952872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.952914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.953185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.953257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.953538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.953580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.953787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.953829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.954039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.954079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.954286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.954329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.954613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.954653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.954918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.954957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.955194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.955246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.955531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.955572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.955845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.955886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.956150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.956189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.956501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.956542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.956748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.956788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.957127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.957199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.957492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.957534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.957763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.957804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.958089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.958129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.958421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.958463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.958663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.958704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.959007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.959046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.959324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.959367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.959663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.959703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.960004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.960045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.960347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.960388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.960614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.960655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.960865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.960906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.961186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.961239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.961539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.961580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.961805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.961845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.962071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.962111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.962391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.962433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.962590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.962631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.962839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.962879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.963083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.697 [2024-12-09 05:25:16.963124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.697 qpair failed and we were unable to recover it.
00:30:34.697 [2024-12-09 05:25:16.963383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.697 [2024-12-09 05:25:16.963425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.697 qpair failed and we were unable to recover it. 00:30:34.697 [2024-12-09 05:25:16.963727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.697 [2024-12-09 05:25:16.963768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.963968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.964008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.964201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.964250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.964476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.964520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 
00:30:34.698 [2024-12-09 05:25:16.964681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.964721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.964993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.965039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.965306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.965348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.965556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.965596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.965798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.965837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 
00:30:34.698 [2024-12-09 05:25:16.966094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.966135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.966445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.966489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.966650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.966691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.966902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.966946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.967255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.967302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 
00:30:34.698 [2024-12-09 05:25:16.967601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.967644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.967855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.967896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.968132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.968173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.968391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.968432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.968706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.968746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 
00:30:34.698 [2024-12-09 05:25:16.968981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.969022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.969310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.969352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.969546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.969588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.969850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.969890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.970099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.970139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 
00:30:34.698 [2024-12-09 05:25:16.970361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.970402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.970625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.970665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.970947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.970987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.971196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.971260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.971538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.971577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 
00:30:34.698 [2024-12-09 05:25:16.971859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.971899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.972103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.972143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.972425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.972467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.972732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.972779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.973059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.973099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 
00:30:34.698 [2024-12-09 05:25:16.973356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.973399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.973680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.973721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.974015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.974056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.974318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.698 [2024-12-09 05:25:16.974360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.698 qpair failed and we were unable to recover it. 00:30:34.698 [2024-12-09 05:25:16.974634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.974675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 
00:30:34.699 [2024-12-09 05:25:16.974946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.974986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.975188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.975237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.975473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.975513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.975806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.975846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.976127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.976167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 
00:30:34.699 [2024-12-09 05:25:16.976455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.976496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.976747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.976787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.977032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.977073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.977332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.977374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.977655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.977695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 
00:30:34.699 [2024-12-09 05:25:16.978004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.978045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.978248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.978290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.978575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.978615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.978897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.978938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.979247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.979289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 
00:30:34.699 [2024-12-09 05:25:16.979576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.979616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.979830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.979870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.980155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.980196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.980462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.980502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.980762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.980803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 
00:30:34.699 [2024-12-09 05:25:16.981000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.981040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.981308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.981350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.981629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.981669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.981931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.981971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.982253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.982294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 
00:30:34.699 [2024-12-09 05:25:16.982562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.982603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.982877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.982917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.983136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.983176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.983492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.983534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.983740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.983781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 
00:30:34.699 [2024-12-09 05:25:16.984058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.984098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.984297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.984339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.699 qpair failed and we were unable to recover it. 00:30:34.699 [2024-12-09 05:25:16.984534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.699 [2024-12-09 05:25:16.984573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.700 qpair failed and we were unable to recover it. 00:30:34.700 [2024-12-09 05:25:16.984853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.700 [2024-12-09 05:25:16.984893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.700 qpair failed and we were unable to recover it. 00:30:34.700 [2024-12-09 05:25:16.985135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.700 [2024-12-09 05:25:16.985225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.700 qpair failed and we were unable to recover it. 
00:30:34.700 [2024-12-09 05:25:16.985537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.700 [2024-12-09 05:25:16.985589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.700 qpair failed and we were unable to recover it. 00:30:34.700 [2024-12-09 05:25:16.985887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.700 [2024-12-09 05:25:16.985939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.700 qpair failed and we were unable to recover it. 00:30:34.700 [2024-12-09 05:25:16.986245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.700 [2024-12-09 05:25:16.986289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.700 qpair failed and we were unable to recover it. 00:30:34.700 [2024-12-09 05:25:16.986573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.700 [2024-12-09 05:25:16.986613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.700 qpair failed and we were unable to recover it. 00:30:34.700 [2024-12-09 05:25:16.986894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.700 [2024-12-09 05:25:16.986935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.700 qpair failed and we were unable to recover it. 
00:30:34.703 [2024-12-09 05:25:17.020168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.020228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.020497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.020537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.020832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.020873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.021130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.021170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.021393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.021449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 
00:30:34.703 [2024-12-09 05:25:17.021720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.021761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.022015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.022085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.022391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.022435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.022673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.022713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.022856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.022896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 
00:30:34.703 [2024-12-09 05:25:17.023173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.023221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.023488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.023528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.023803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.023843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.024114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.024154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.024461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.024503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 
00:30:34.703 [2024-12-09 05:25:17.024705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.024745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.024951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.024992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.025249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.025290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.025571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.025611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.025897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.025937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 
00:30:34.703 [2024-12-09 05:25:17.026150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.026190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.026402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.026442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.026697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.026738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.027024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.027064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.027342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.027384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 
00:30:34.703 [2024-12-09 05:25:17.027663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.027703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.027968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.028007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.028218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.028260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.028495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.028534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.703 qpair failed and we were unable to recover it. 00:30:34.703 [2024-12-09 05:25:17.028806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.703 [2024-12-09 05:25:17.028847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 
00:30:34.704 [2024-12-09 05:25:17.029119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.029159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.029418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.029459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.029750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.029790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.030053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.030099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.030252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.030293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 
00:30:34.704 [2024-12-09 05:25:17.030577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.030617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.030817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.030857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.031088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.031128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.031433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.031475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.031766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.031807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 
00:30:34.704 [2024-12-09 05:25:17.032070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.032109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.032369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.032411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.032697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.032736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.033014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.033057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.033322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.033363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 
00:30:34.704 [2024-12-09 05:25:17.033646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.033686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.033811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.033852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.034137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.034177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.034447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.034488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.034757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.034797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 
00:30:34.704 [2024-12-09 05:25:17.035072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.035112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.035317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.035359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.035617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.035656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.035940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.035980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.036265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.036307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 
00:30:34.704 [2024-12-09 05:25:17.036569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.036608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.036889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.036929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.037217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.037259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.037542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.037582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.037858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.037898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 
00:30:34.704 [2024-12-09 05:25:17.038114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.038161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.038383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.038425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.038737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.038777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.039056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.039097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.039311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.039352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 
00:30:34.704 [2024-12-09 05:25:17.039548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.039589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.039734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.039775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.704 qpair failed and we were unable to recover it. 00:30:34.704 [2024-12-09 05:25:17.040067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.704 [2024-12-09 05:25:17.040107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.705 qpair failed and we were unable to recover it. 00:30:34.705 [2024-12-09 05:25:17.040370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.705 [2024-12-09 05:25:17.040412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.705 qpair failed and we were unable to recover it. 00:30:34.705 [2024-12-09 05:25:17.040620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.705 [2024-12-09 05:25:17.040660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.705 qpair failed and we were unable to recover it. 
00:30:34.705 [2024-12-09 05:25:17.040965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.705 [2024-12-09 05:25:17.041005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.705 qpair failed and we were unable to recover it. 00:30:34.705 [2024-12-09 05:25:17.041221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.705 [2024-12-09 05:25:17.041263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.705 qpair failed and we were unable to recover it. 00:30:34.705 [2024-12-09 05:25:17.041540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.705 [2024-12-09 05:25:17.041581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.705 qpair failed and we were unable to recover it. 00:30:34.705 [2024-12-09 05:25:17.041792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.705 [2024-12-09 05:25:17.041832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.705 qpair failed and we were unable to recover it. 00:30:34.705 [2024-12-09 05:25:17.042123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.705 [2024-12-09 05:25:17.042175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.705 qpair failed and we were unable to recover it. 
00:30:34.705 [2024-12-09 05:25:17.042318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.705 [2024-12-09 05:25:17.042358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.705 qpair failed and we were unable to recover it.
00:30:34.705 [... identical connect()/qpair-failure triplet repeated for tqpair=0x1c91000 from 05:25:17.042638 through 05:25:17.072364 ...]
00:30:34.706 [2024-12-09 05:25:17.058102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.706 [2024-12-09 05:25:17.058165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:34.706 qpair failed and we were unable to recover it.
00:30:34.706 [... identical triplet repeated for tqpair=0x7ff96c000b90 through 05:25:17.076476 ...]
00:30:34.706 [2024-12-09 05:25:17.058488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.706 [2024-12-09 05:25:17.058535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.706 qpair failed and we were unable to recover it.
00:30:34.706 [... identical triplet repeated for tqpair=0x7ff960000b90 through 05:25:17.072070 ...]
00:30:34.708 [2024-12-09 05:25:17.076754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.076793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 00:30:34.708 [2024-12-09 05:25:17.077009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.077049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 00:30:34.708 [2024-12-09 05:25:17.077330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.077372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 00:30:34.708 [2024-12-09 05:25:17.077651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.077691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 00:30:34.708 [2024-12-09 05:25:17.077899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.077940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 
00:30:34.708 [2024-12-09 05:25:17.078188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.078244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 00:30:34.708 [2024-12-09 05:25:17.078482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.078527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 00:30:34.708 [2024-12-09 05:25:17.078810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.078850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 00:30:34.708 [2024-12-09 05:25:17.079135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.079174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 00:30:34.708 [2024-12-09 05:25:17.079471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.079513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 
00:30:34.708 [2024-12-09 05:25:17.079816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.079855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 00:30:34.708 [2024-12-09 05:25:17.080150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.080190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 00:30:34.708 [2024-12-09 05:25:17.080496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.080536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 00:30:34.708 [2024-12-09 05:25:17.080750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.080789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 00:30:34.708 [2024-12-09 05:25:17.081071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.081110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 
00:30:34.708 [2024-12-09 05:25:17.081401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.081444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 00:30:34.708 [2024-12-09 05:25:17.081725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.081765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 00:30:34.708 [2024-12-09 05:25:17.082048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.082087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 00:30:34.708 [2024-12-09 05:25:17.082367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.082415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 00:30:34.708 [2024-12-09 05:25:17.082716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.082756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 
00:30:34.708 [2024-12-09 05:25:17.083040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.083080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 00:30:34.708 [2024-12-09 05:25:17.083291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.083332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 00:30:34.708 [2024-12-09 05:25:17.083525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.083565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 00:30:34.708 [2024-12-09 05:25:17.083870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.083909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 00:30:34.708 [2024-12-09 05:25:17.084185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.708 [2024-12-09 05:25:17.084246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.708 qpair failed and we were unable to recover it. 
00:30:34.709 [2024-12-09 05:25:17.084485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.084526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.084811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.084851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.085060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.085101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.085304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.085346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.085540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.085579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 
00:30:34.709 [2024-12-09 05:25:17.085837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.085878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.086148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.086188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.086457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.086498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.086769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.086809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.087092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.087132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 
00:30:34.709 [2024-12-09 05:25:17.087423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.087464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.087726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.087767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.088045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.088085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.088350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.088391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.088670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.088710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 
00:30:34.709 [2024-12-09 05:25:17.088924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.088963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.089168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.089220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.089448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.089489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.089805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.089844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.090050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.090090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 
00:30:34.709 [2024-12-09 05:25:17.090380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.090422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.090706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.090746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.091023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.091063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.091330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.091372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.091648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.091688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 
00:30:34.709 [2024-12-09 05:25:17.091953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.091993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.092274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.092315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.092542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.092582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.092882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.092921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.093205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.093256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 
00:30:34.709 [2024-12-09 05:25:17.093496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.093536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.093824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.093863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.094152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.094193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.094499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.094546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.094827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.094867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 
00:30:34.709 [2024-12-09 05:25:17.095068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.095108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.095393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.709 [2024-12-09 05:25:17.095434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.709 qpair failed and we were unable to recover it. 00:30:34.709 [2024-12-09 05:25:17.095693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.095733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.096014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.096055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.096272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.096314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 
00:30:34.710 [2024-12-09 05:25:17.096619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.096659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.096965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.097005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.097290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.097331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.097616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.097657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.097936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.097975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 
00:30:34.710 [2024-12-09 05:25:17.098137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.098177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.098468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.098508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.098712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.098752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.098983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.099023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.099282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.099323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 
00:30:34.710 [2024-12-09 05:25:17.099608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.099648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.099879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.099919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.100181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.100239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.100450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.100491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.100698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.100737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 
00:30:34.710 [2024-12-09 05:25:17.101019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.101058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.101319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.101361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.101640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.101681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.101959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.101999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.102269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.102326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 
00:30:34.710 [2024-12-09 05:25:17.102615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.102655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.102858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.102898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.103183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.103235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.103477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.103518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.103803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.103843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 
00:30:34.710 [2024-12-09 05:25:17.104107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.104147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.104452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.104495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.104779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.104819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.105070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.105109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.105395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.105436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 
00:30:34.710 [2024-12-09 05:25:17.105724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.105764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.105998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.106038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.106250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.106291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.710 [2024-12-09 05:25:17.106521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.710 [2024-12-09 05:25:17.106567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.710 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.106873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.106913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 
00:30:34.711 [2024-12-09 05:25:17.107110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.107149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.107362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.107402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.107605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.107645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.107947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.107986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.108227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.108268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 
00:30:34.711 [2024-12-09 05:25:17.108548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.108588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.108866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.108905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.109168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.109219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.109487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.109528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.109808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.109848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 
00:30:34.711 [2024-12-09 05:25:17.110090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.110129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.110433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.110474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.110764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.110804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.111013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.111053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.111312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.111354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 
00:30:34.711 [2024-12-09 05:25:17.111656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.111695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.111972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.112011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.112283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.112325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.112545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.112585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.112806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.112846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 
00:30:34.711 [2024-12-09 05:25:17.113152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.113193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.113430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.113470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.113759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.113798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.114080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.114120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.114332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.114373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 
00:30:34.711 [2024-12-09 05:25:17.114686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.114726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.115051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.115091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.115309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.115351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.115625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.115665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.115867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.115907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 
00:30:34.711 [2024-12-09 05:25:17.116170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.116228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.116484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.116524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.116809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.116849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.117131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.117170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.117447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.117500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 
00:30:34.711 [2024-12-09 05:25:17.117673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.117714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.711 [2024-12-09 05:25:17.117915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.711 [2024-12-09 05:25:17.117955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.711 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.118094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.118135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.118427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.118477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.118706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.118746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 
00:30:34.712 [2024-12-09 05:25:17.118971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.119011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.119239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.119281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.119489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.119529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.119793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.119835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.120110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.120150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 
00:30:34.712 [2024-12-09 05:25:17.120438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.120479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.120690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.120731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.120933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.120973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.121230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.121271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.121553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.121593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 
00:30:34.712 [2024-12-09 05:25:17.121893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.121933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.122170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.122217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.122428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.122469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.122739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.122779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.123028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.123068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 
00:30:34.712 [2024-12-09 05:25:17.123301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.123344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.123647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.123687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.123980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.124021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.124307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.124349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.124574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.124614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 
00:30:34.712 [2024-12-09 05:25:17.124878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.124918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.125199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.125249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.125515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.125555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.125821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.125861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.126108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.126148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 
00:30:34.712 [2024-12-09 05:25:17.126443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.126491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.126767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.126807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.127085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.127126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.127397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.127439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 00:30:34.712 [2024-12-09 05:25:17.127712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.712 [2024-12-09 05:25:17.127752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.712 qpair failed and we were unable to recover it. 
00:30:34.712 [2024-12-09 05:25:17.127952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.712 [2024-12-09 05:25:17.127993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.712 qpair failed and we were unable to recover it.
00:30:34.712 [2024-12-09 05:25:17.128200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.712 [2024-12-09 05:25:17.128250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.712 qpair failed and we were unable to recover it.
00:30:34.712 [2024-12-09 05:25:17.128509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.712 [2024-12-09 05:25:17.128549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.712 qpair failed and we were unable to recover it.
00:30:34.712 [2024-12-09 05:25:17.128781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.712 [2024-12-09 05:25:17.128820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.712 qpair failed and we were unable to recover it.
00:30:34.712 [2024-12-09 05:25:17.129127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.712 [2024-12-09 05:25:17.129167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.129410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.129460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.129749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.129790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.130051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.130091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.130372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.130415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.130646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.130686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.130993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.131034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.131314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.131357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.131644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.131683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.131943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.131983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.132270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.132312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.132525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.132565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.132763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.132803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.133104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.133145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.133388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.133431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.133724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.133764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.134006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.134046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.134196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.134248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.134530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.134577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.134817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.134857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.135091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.135131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.135388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.135431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.135711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.135751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.135901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.135942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.136228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.136270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.136485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.136525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.136748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.136788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.137004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.137045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.137321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.137362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.137632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.137673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.137884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.137924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.138089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.138128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.713 qpair failed and we were unable to recover it.
00:30:34.713 [2024-12-09 05:25:17.138363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.713 [2024-12-09 05:25:17.138404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.138710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.138752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.139073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.139113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.139396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.139439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.139659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.139699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.139911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.139950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.140217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.140258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.140536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.140577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.140817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.140857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.141062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.141103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.141307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.141348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.141655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.141699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.141979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.142020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.142310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.142374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.142673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.142717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.143001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.143041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.143322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.143364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.143653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.143693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.143894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.143934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.144237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.144280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.144564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.144604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.144895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.144935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.145187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.145239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.145444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.145485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.145769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.145809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.146062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.146103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.146391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.987 [2024-12-09 05:25:17.146439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.987 qpair failed and we were unable to recover it.
00:30:34.987 [2024-12-09 05:25:17.146721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.146762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.147042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.147082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.147344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.147386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.147665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.147705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.147992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.148033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.148319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.148360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.148613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.148653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.148937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.148977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.149268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.149309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.149590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.149630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.149914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.149954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.150221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.150263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.150535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.150576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.150785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.150826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.151132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.151172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.151427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.151469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.151753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.151792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.152080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.152119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.152382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.152425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.152706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.152746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.153005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.153045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.153328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.153369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.153591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.153632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.153937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.153977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.154244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.154285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.154510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.154550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.154785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.154830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.155028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.155069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.155374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.155416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.155638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.155679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.155937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.155977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.156126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.156166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.156481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.156524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.156815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.156855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.157137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.157178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.157404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.157444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.988 [2024-12-09 05:25:17.157709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.988 [2024-12-09 05:25:17.157749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.988 qpair failed and we were unable to recover it.
00:30:34.989 [2024-12-09 05:25:17.158004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.989 [2024-12-09 05:25:17.158044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.989 qpair failed and we were unable to recover it.
00:30:34.989 [2024-12-09 05:25:17.158331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.989 [2024-12-09 05:25:17.158373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.989 qpair failed and we were unable to recover it.
00:30:34.989 [2024-12-09 05:25:17.158663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.989 [2024-12-09 05:25:17.158709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.989 qpair failed and we were unable to recover it.
00:30:34.989 [2024-12-09 05:25:17.158993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.989 [2024-12-09 05:25:17.159033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.989 qpair failed and we were unable to recover it.
00:30:34.989 [2024-12-09 05:25:17.159314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.989 [2024-12-09 05:25:17.159356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.989 qpair failed and we were unable to recover it.
00:30:34.989 [2024-12-09 05:25:17.159639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.989 [2024-12-09 05:25:17.159679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.989 qpair failed and we were unable to recover it.
00:30:34.989 [2024-12-09 05:25:17.159963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.989 [2024-12-09 05:25:17.160003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.989 qpair failed and we were unable to recover it.
00:30:34.989 [2024-12-09 05:25:17.160282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.989 [2024-12-09 05:25:17.160323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.989 qpair failed and we were unable to recover it.
00:30:34.989 [2024-12-09 05:25:17.160583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.989 [2024-12-09 05:25:17.160623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.989 qpair failed and we were unable to recover it.
00:30:34.989 [2024-12-09 05:25:17.160855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.989 [2024-12-09 05:25:17.160894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.989 qpair failed and we were unable to recover it.
00:30:34.989 [2024-12-09 05:25:17.161175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.989 [2024-12-09 05:25:17.161225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.989 qpair failed and we were unable to recover it.
00:30:34.989 [2024-12-09 05:25:17.161437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.989 [2024-12-09 05:25:17.161477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.989 qpair failed and we were unable to recover it.
00:30:34.989 [2024-12-09 05:25:17.161755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.989 [2024-12-09 05:25:17.161795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.989 qpair failed and we were unable to recover it.
00:30:34.989 [2024-12-09 05:25:17.162068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.989 [2024-12-09 05:25:17.162108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.989 qpair failed and we were unable to recover it.
00:30:34.989 [2024-12-09 05:25:17.162389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:34.989 [2024-12-09 05:25:17.162431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:34.989 qpair failed and we were unable to recover it.
00:30:34.989 [2024-12-09 05:25:17.162729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.162769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 00:30:34.989 [2024-12-09 05:25:17.163070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.163111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 00:30:34.989 [2024-12-09 05:25:17.163411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.163453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 00:30:34.989 [2024-12-09 05:25:17.163747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.163787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 00:30:34.989 [2024-12-09 05:25:17.164089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.164129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 
00:30:34.989 [2024-12-09 05:25:17.164369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.164411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 00:30:34.989 [2024-12-09 05:25:17.164632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.164673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 00:30:34.989 [2024-12-09 05:25:17.164877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.164917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 00:30:34.989 [2024-12-09 05:25:17.165137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.165176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 00:30:34.989 [2024-12-09 05:25:17.165447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.165487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 
00:30:34.989 [2024-12-09 05:25:17.165691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.165731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 00:30:34.989 [2024-12-09 05:25:17.165862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.165901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 00:30:34.989 [2024-12-09 05:25:17.166176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.166227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 00:30:34.989 [2024-12-09 05:25:17.166429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.166469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 00:30:34.989 [2024-12-09 05:25:17.166793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.166846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 
00:30:34.989 [2024-12-09 05:25:17.167148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.167189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 00:30:34.989 [2024-12-09 05:25:17.167448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.167489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 00:30:34.989 [2024-12-09 05:25:17.167772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.167812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 00:30:34.989 [2024-12-09 05:25:17.168056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.168096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 00:30:34.989 [2024-12-09 05:25:17.168407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.168452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 
00:30:34.989 [2024-12-09 05:25:17.168783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.168823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 00:30:34.989 [2024-12-09 05:25:17.169100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.169140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.989 qpair failed and we were unable to recover it. 00:30:34.989 [2024-12-09 05:25:17.169408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.989 [2024-12-09 05:25:17.169451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.169721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.169760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.170053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.170094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 
00:30:34.990 [2024-12-09 05:25:17.170382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.170423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.170700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.170740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.171002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.171051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.171264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.171306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.171566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.171606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 
00:30:34.990 [2024-12-09 05:25:17.171906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.171946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.172206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.172257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.172462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.172502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.172785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.172825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.173032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.173074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 
00:30:34.990 [2024-12-09 05:25:17.173359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.173400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.173612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.173651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.173872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.173913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.174189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.174236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.174453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.174493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 
00:30:34.990 [2024-12-09 05:25:17.174712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.174753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.175040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.175081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.175361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.175403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.175664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.175704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.175988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.176029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 
00:30:34.990 [2024-12-09 05:25:17.176325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.176366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.176605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.176644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.176944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.176984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.177243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.177283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.177491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.177531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 
00:30:34.990 [2024-12-09 05:25:17.177803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.177843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.178098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.178138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.178364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.178405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.178708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.178748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.179082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.179135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 
00:30:34.990 [2024-12-09 05:25:17.179439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.179483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.179744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.179783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.179988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.180028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.180308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.180349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 00:30:34.990 [2024-12-09 05:25:17.180612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.990 [2024-12-09 05:25:17.180652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.990 qpair failed and we were unable to recover it. 
00:30:34.990 [2024-12-09 05:25:17.180858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.991 [2024-12-09 05:25:17.180897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.991 qpair failed and we were unable to recover it. 00:30:34.991 [2024-12-09 05:25:17.181197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.991 [2024-12-09 05:25:17.181245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.991 qpair failed and we were unable to recover it. 00:30:34.991 [2024-12-09 05:25:17.181519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.991 [2024-12-09 05:25:17.181559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.991 qpair failed and we were unable to recover it. 00:30:34.991 [2024-12-09 05:25:17.181853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.991 [2024-12-09 05:25:17.181892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.991 qpair failed and we were unable to recover it. 00:30:34.991 [2024-12-09 05:25:17.182162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.991 [2024-12-09 05:25:17.182201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.991 qpair failed and we were unable to recover it. 
00:30:34.991 [2024-12-09 05:25:17.182475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.991 [2024-12-09 05:25:17.182516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.991 qpair failed and we were unable to recover it. 00:30:34.991 [2024-12-09 05:25:17.182787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.991 [2024-12-09 05:25:17.182826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.991 qpair failed and we were unable to recover it. 00:30:34.991 [2024-12-09 05:25:17.183100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.991 [2024-12-09 05:25:17.183147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.991 qpair failed and we were unable to recover it. 00:30:34.991 [2024-12-09 05:25:17.183364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.991 [2024-12-09 05:25:17.183406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.991 qpair failed and we were unable to recover it. 00:30:34.991 [2024-12-09 05:25:17.183711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.991 [2024-12-09 05:25:17.183751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.991 qpair failed and we were unable to recover it. 
00:30:34.991 [2024-12-09 05:25:17.184019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.991 [2024-12-09 05:25:17.184060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.991 qpair failed and we were unable to recover it. 00:30:34.991 [2024-12-09 05:25:17.184337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.991 [2024-12-09 05:25:17.184379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.991 qpair failed and we were unable to recover it. 00:30:34.991 [2024-12-09 05:25:17.184678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.991 [2024-12-09 05:25:17.184717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.991 qpair failed and we were unable to recover it. 00:30:34.991 [2024-12-09 05:25:17.184938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.991 [2024-12-09 05:25:17.184978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.991 qpair failed and we were unable to recover it. 00:30:34.991 [2024-12-09 05:25:17.185195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.991 [2024-12-09 05:25:17.185246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.991 qpair failed and we were unable to recover it. 
00:30:34.991 [2024-12-09 05:25:17.185530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.991 [2024-12-09 05:25:17.185570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.991 qpair failed and we were unable to recover it.
00:30:34.991 [... previous connect()/nvme_tcp_qpair_connect_sock error pair repeated for every subsequent connection attempt from 05:25:17.185782 through 05:25:17.219834, errno = 111 (ECONNREFUSED) each time, addr=10.0.0.2, port=4420; tqpair values observed: 0x7ff96c000b90, 0x1c91000, 0x7ff960000b90; each attempt ended with "qpair failed and we were unable to recover it." ...]
00:30:34.994 [2024-12-09 05:25:17.220070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.994 [2024-12-09 05:25:17.220110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.994 qpair failed and we were unable to recover it. 00:30:34.994 [2024-12-09 05:25:17.220412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.994 [2024-12-09 05:25:17.220455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.994 qpair failed and we were unable to recover it. 00:30:34.994 [2024-12-09 05:25:17.220674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.994 [2024-12-09 05:25:17.220713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.994 qpair failed and we were unable to recover it. 00:30:34.994 [2024-12-09 05:25:17.220911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.994 [2024-12-09 05:25:17.220952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.994 qpair failed and we were unable to recover it. 00:30:34.994 [2024-12-09 05:25:17.221163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.994 [2024-12-09 05:25:17.221203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.994 qpair failed and we were unable to recover it. 
00:30:34.994 [2024-12-09 05:25:17.221530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.994 [2024-12-09 05:25:17.221571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.994 qpair failed and we were unable to recover it. 00:30:34.994 [2024-12-09 05:25:17.221831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.994 [2024-12-09 05:25:17.221872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.994 qpair failed and we were unable to recover it. 00:30:34.994 [2024-12-09 05:25:17.222151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.994 [2024-12-09 05:25:17.222192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.994 qpair failed and we were unable to recover it. 00:30:34.994 [2024-12-09 05:25:17.222489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.994 [2024-12-09 05:25:17.222530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.994 qpair failed and we were unable to recover it. 00:30:34.994 [2024-12-09 05:25:17.222807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.994 [2024-12-09 05:25:17.222848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.994 qpair failed and we were unable to recover it. 
00:30:34.994 [2024-12-09 05:25:17.222998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.994 [2024-12-09 05:25:17.223039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.994 qpair failed and we were unable to recover it. 00:30:34.994 [2024-12-09 05:25:17.223269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.994 [2024-12-09 05:25:17.223318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.994 qpair failed and we were unable to recover it. 00:30:34.994 [2024-12-09 05:25:17.223525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.994 [2024-12-09 05:25:17.223565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.994 qpair failed and we were unable to recover it. 00:30:34.994 [2024-12-09 05:25:17.223858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.994 [2024-12-09 05:25:17.223899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.994 qpair failed and we were unable to recover it. 00:30:34.994 [2024-12-09 05:25:17.224095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.994 [2024-12-09 05:25:17.224135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.994 qpair failed and we were unable to recover it. 
00:30:34.994 [2024-12-09 05:25:17.224455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.224497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.224756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.224796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.225075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.225116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.225417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.225460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.225740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.225781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 
00:30:34.995 [2024-12-09 05:25:17.226039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.226079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.226363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.226404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.226688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.226728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.227012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.227053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.227254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.227297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 
00:30:34.995 [2024-12-09 05:25:17.227588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.227629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.227911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.227951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.228243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.228289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.228570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.228611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.228804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.228844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 
00:30:34.995 [2024-12-09 05:25:17.229098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.229139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.229425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.229467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.229741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.229781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.230048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.230088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.230306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.230350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 
00:30:34.995 [2024-12-09 05:25:17.230660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.230700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.230964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.231005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.231195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.231249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.231567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.231608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.231905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.231947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 
00:30:34.995 [2024-12-09 05:25:17.232181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.232245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.232480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.232520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.232805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.232845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.233107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.233147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.233478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.233520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 
00:30:34.995 [2024-12-09 05:25:17.233781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.233821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.234097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.234137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.234411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.234453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.234724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.234764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.235039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.235080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 
00:30:34.995 [2024-12-09 05:25:17.235350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.235392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.235647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.235694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.995 [2024-12-09 05:25:17.235976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.995 [2024-12-09 05:25:17.236017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.995 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.236267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.236310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.236604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.236644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 
00:30:34.996 [2024-12-09 05:25:17.236924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.236964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.237227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.237268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.237480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.237520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.237753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.237793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.238090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.238130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 
00:30:34.996 [2024-12-09 05:25:17.238438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.238480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.238740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.238780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.238972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.239012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.239281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.239324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.239601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.239641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 
00:30:34.996 [2024-12-09 05:25:17.239933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.239973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.240236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.240281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.240554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.240594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.240878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.240918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.241206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.241265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 
00:30:34.996 [2024-12-09 05:25:17.241524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.241565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.241823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.241864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.242022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.242062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.242350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.242391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.242549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.242590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 
00:30:34.996 [2024-12-09 05:25:17.242860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.242900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.243171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.243220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.243380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.243422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.243640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.243680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.243985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.244025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 
00:30:34.996 [2024-12-09 05:25:17.244235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.244281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.244535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.244576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.244879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.244919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.245198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.245250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.245404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.245445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 
00:30:34.996 [2024-12-09 05:25:17.245670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.245710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.996 [2024-12-09 05:25:17.245993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.996 [2024-12-09 05:25:17.246032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.996 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.246294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.246336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.246616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.246656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.246927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.246967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 
00:30:34.997 [2024-12-09 05:25:17.247229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.247271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.247556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.247603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.247881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.247921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.248181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.248238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.248444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.248485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 
00:30:34.997 [2024-12-09 05:25:17.248750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.248789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.249054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.249094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.249371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.249413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.249676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.249717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.249956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.249996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 
00:30:34.997 [2024-12-09 05:25:17.250281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.250323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.250517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.250557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.250843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.250885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.251114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.251154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.251379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.251420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 
00:30:34.997 [2024-12-09 05:25:17.251631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.251672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.251931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.251972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.252253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.252295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.252558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.252597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.252874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.252914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 
00:30:34.997 [2024-12-09 05:25:17.253197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.253244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.253507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.253547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.253701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.253742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.254030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.254070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.254347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.254388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 
00:30:34.997 [2024-12-09 05:25:17.254672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.254712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.254995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.255036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.255319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.255361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.255676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.255717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.255939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.255980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 
00:30:34.997 [2024-12-09 05:25:17.256283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.256325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.256583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.256623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.256907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.256947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.257252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.257293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 00:30:34.997 [2024-12-09 05:25:17.257525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.997 [2024-12-09 05:25:17.257566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.997 qpair failed and we were unable to recover it. 
00:30:34.998 [2024-12-09 05:25:17.257841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.257881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.258195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.258265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.258553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.258594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.258790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.258830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.259111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.259151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 
00:30:34.998 [2024-12-09 05:25:17.259446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.259488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.259786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.259833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.260128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.260168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.260238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9ef20 (9): Bad file descriptor 00:30:34.998 [2024-12-09 05:25:17.260709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.260789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.261099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.261143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 
00:30:34.998 [2024-12-09 05:25:17.261462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.261507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.261718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.261759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.261971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.262012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.262270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.262312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.262545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.262585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 
00:30:34.998 [2024-12-09 05:25:17.262794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.262835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.263119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.263160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.263407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.263448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.263743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.263783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.264063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.264112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 
00:30:34.998 [2024-12-09 05:25:17.264452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.264496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.264718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.264758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.264961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.265001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.265308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.265350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.265624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.265664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 
00:30:34.998 [2024-12-09 05:25:17.265935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.265975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.266250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.266292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.266585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.266625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.266907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.266947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.267231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.267273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 
00:30:34.998 [2024-12-09 05:25:17.267552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.267592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.267872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.267912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.268126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.268166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.268483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.268538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.268830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.268871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 
00:30:34.998 [2024-12-09 05:25:17.269152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.998 [2024-12-09 05:25:17.269192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.998 qpair failed and we were unable to recover it. 00:30:34.998 [2024-12-09 05:25:17.269510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.999 [2024-12-09 05:25:17.269550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.999 qpair failed and we were unable to recover it. 00:30:34.999 [2024-12-09 05:25:17.269831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.999 [2024-12-09 05:25:17.269872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.999 qpair failed and we were unable to recover it. 00:30:34.999 [2024-12-09 05:25:17.270156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.999 [2024-12-09 05:25:17.270195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.999 qpair failed and we were unable to recover it. 00:30:34.999 [2024-12-09 05:25:17.270509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.999 [2024-12-09 05:25:17.270550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.999 qpair failed and we were unable to recover it. 
00:30:34.999 [2024-12-09 05:25:17.270865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.999 [2024-12-09 05:25:17.270905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.999 qpair failed and we were unable to recover it. 00:30:34.999 [2024-12-09 05:25:17.271163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.999 [2024-12-09 05:25:17.271202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.999 qpair failed and we were unable to recover it. 00:30:34.999 [2024-12-09 05:25:17.271439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.999 [2024-12-09 05:25:17.271479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.999 qpair failed and we were unable to recover it. 00:30:34.999 [2024-12-09 05:25:17.271782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.999 [2024-12-09 05:25:17.271822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.999 qpair failed and we were unable to recover it. 00:30:34.999 [2024-12-09 05:25:17.272104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.999 [2024-12-09 05:25:17.272144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.999 qpair failed and we were unable to recover it. 
00:30:34.999 [2024-12-09 05:25:17.272437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.999 [2024-12-09 05:25:17.272479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.999 qpair failed and we were unable to recover it. 00:30:34.999 [2024-12-09 05:25:17.272781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.999 [2024-12-09 05:25:17.272822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.999 qpair failed and we were unable to recover it. 00:30:34.999 [2024-12-09 05:25:17.273119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.999 [2024-12-09 05:25:17.273159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.999 qpair failed and we were unable to recover it. 00:30:34.999 [2024-12-09 05:25:17.273464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.999 [2024-12-09 05:25:17.273506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.999 qpair failed and we were unable to recover it. 00:30:34.999 [2024-12-09 05:25:17.273721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:34.999 [2024-12-09 05:25:17.273761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:34.999 qpair failed and we were unable to recover it. 
00:30:35.002 [2024-12-09 05:25:17.306301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.306343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 00:30:35.002 [2024-12-09 05:25:17.306499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.306539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 00:30:35.002 [2024-12-09 05:25:17.306797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.306836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 00:30:35.002 [2024-12-09 05:25:17.307070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.307110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 00:30:35.002 [2024-12-09 05:25:17.307334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.307376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 
00:30:35.002 [2024-12-09 05:25:17.307662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.307701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 00:30:35.002 [2024-12-09 05:25:17.308007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.308048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 00:30:35.002 [2024-12-09 05:25:17.308284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.308325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 00:30:35.002 [2024-12-09 05:25:17.308546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.308585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 00:30:35.002 [2024-12-09 05:25:17.308858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.308898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 
00:30:35.002 [2024-12-09 05:25:17.309091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.309131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 00:30:35.002 [2024-12-09 05:25:17.309460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.309501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 00:30:35.002 [2024-12-09 05:25:17.309712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.309752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 00:30:35.002 [2024-12-09 05:25:17.310054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.310094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 00:30:35.002 [2024-12-09 05:25:17.310344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.310386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 
00:30:35.002 [2024-12-09 05:25:17.310624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.310663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 00:30:35.002 [2024-12-09 05:25:17.310883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.310923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 00:30:35.002 [2024-12-09 05:25:17.311130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.311170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 00:30:35.002 [2024-12-09 05:25:17.311402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.311468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 00:30:35.002 [2024-12-09 05:25:17.311752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.311801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 
00:30:35.002 [2024-12-09 05:25:17.312085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.312126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 00:30:35.002 [2024-12-09 05:25:17.312363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.312406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 00:30:35.002 [2024-12-09 05:25:17.312727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.312767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 00:30:35.002 [2024-12-09 05:25:17.313090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.002 [2024-12-09 05:25:17.313130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.002 qpair failed and we were unable to recover it. 00:30:35.002 [2024-12-09 05:25:17.313425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.313467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 
00:30:35.003 [2024-12-09 05:25:17.313704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.313744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.314017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.314057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.314265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.314308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.314613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.314654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.314948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.314989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 
00:30:35.003 [2024-12-09 05:25:17.315272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.315314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.315552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.315592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.315895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.315935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.316228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.316270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.316568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.316608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 
00:30:35.003 [2024-12-09 05:25:17.316909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.316949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.317177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.317225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.317456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.317497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.317797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.317838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.318037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.318077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 
00:30:35.003 [2024-12-09 05:25:17.318387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.318429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.318641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.318682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.319003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.319043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.319269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.319310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.319533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.319574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 
00:30:35.003 [2024-12-09 05:25:17.319831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.319871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.320137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.320186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.320432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.320473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.320634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.320674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.320885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.320925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 
00:30:35.003 [2024-12-09 05:25:17.321184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.321234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.321428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.321468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.321744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.321784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.321930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.321971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.322253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.322295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 
00:30:35.003 [2024-12-09 05:25:17.322552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.322592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.322800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.322840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.323120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.323160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.323421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.323463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.323681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.323721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 
00:30:35.003 [2024-12-09 05:25:17.323948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.323989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.324194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.324243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.003 qpair failed and we were unable to recover it. 00:30:35.003 [2024-12-09 05:25:17.324526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.003 [2024-12-09 05:25:17.324566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.324850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.324890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.325150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.325190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 
00:30:35.004 [2024-12-09 05:25:17.325489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.325530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.325770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.325810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.326112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.326152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.326455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.326497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.326779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.326819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 
00:30:35.004 [2024-12-09 05:25:17.327031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.327072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.327354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.327395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.327683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.327723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.328006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.328046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.328245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.328287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 
00:30:35.004 [2024-12-09 05:25:17.328592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.328632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.328842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.328882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.329154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.329193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.329507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.329548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.329829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.329869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 
00:30:35.004 [2024-12-09 05:25:17.330166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.330206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.330505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.330546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.330848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.330889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.331146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.331185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.331465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.331506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 
00:30:35.004 [2024-12-09 05:25:17.331794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.331835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.332116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.332155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.332504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.332549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.332846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.332885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.333182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.333231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 
00:30:35.004 [2024-12-09 05:25:17.333527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.333567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.333834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.333874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.334168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.334221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.334455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.334495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.334699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.334740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 
00:30:35.004 [2024-12-09 05:25:17.334996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.335036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.335268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.335310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.335520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.335560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.335872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.335912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.004 [2024-12-09 05:25:17.336235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.336276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 
00:30:35.004 [2024-12-09 05:25:17.336556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.004 [2024-12-09 05:25:17.336603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.004 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.336864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.336903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.337132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.337171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.337380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.337421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.337679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.337718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 
00:30:35.005 [2024-12-09 05:25:17.338004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.338043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.338302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.338344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.338628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.338668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.338951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.338991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.339274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.339316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 
00:30:35.005 [2024-12-09 05:25:17.339529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.339569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.339852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.339892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.340151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.340190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.340477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.340517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.340787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.340828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 
00:30:35.005 [2024-12-09 05:25:17.341055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.341095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.341353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.341394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.341676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.341716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.341976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.342015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.342295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.342336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 
00:30:35.005 [2024-12-09 05:25:17.342572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.342612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.342869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.342909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.343136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.343176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.343413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.343454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.343662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.343702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 
00:30:35.005 [2024-12-09 05:25:17.343986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.344026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.344289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.344331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.344605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.344683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.344986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.345031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.345345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.345390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 
00:30:35.005 [2024-12-09 05:25:17.345605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.345646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.345905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.345946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.346251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.346292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.346572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.346612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 00:30:35.005 [2024-12-09 05:25:17.346825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.005 [2024-12-09 05:25:17.346865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.005 qpair failed and we were unable to recover it. 
00:30:35.006 [2024-12-09 05:25:17.347013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.347054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.347264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.347306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.347588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.347628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.347912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.347953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.348220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.348261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 
00:30:35.006 [2024-12-09 05:25:17.348555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.348595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.348873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.348913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.349122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.349162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.349369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.349410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.349668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.349708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 
00:30:35.006 [2024-12-09 05:25:17.349997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.350036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.350336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.350377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.350669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.350709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.350919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.350959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.351184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.351233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 
00:30:35.006 [2024-12-09 05:25:17.351434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.351474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.351708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.351748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.351941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.351981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.352260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.352301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.352577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.352618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 
00:30:35.006 [2024-12-09 05:25:17.352879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.352918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.353147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.353186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.353505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.353546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.353828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.353867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.354097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.354138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 
00:30:35.006 [2024-12-09 05:25:17.354446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.354488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.354771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.354811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.355075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.355115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.355391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.355433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.355701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.355741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 
00:30:35.006 [2024-12-09 05:25:17.355891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.355930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.356133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.356173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.356465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.356512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.356785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.356825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.357102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.357142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 
00:30:35.006 [2024-12-09 05:25:17.357369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.357411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.357719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.357759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.006 [2024-12-09 05:25:17.357904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.006 [2024-12-09 05:25:17.357943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.006 qpair failed and we were unable to recover it. 00:30:35.007 [2024-12-09 05:25:17.358226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.007 [2024-12-09 05:25:17.358267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.007 qpair failed and we were unable to recover it. 00:30:35.007 [2024-12-09 05:25:17.358478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.007 [2024-12-09 05:25:17.358518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.007 qpair failed and we were unable to recover it. 
00:30:35.007 [2024-12-09 05:25:17.358741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.007 [2024-12-09 05:25:17.358781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.007 qpair failed and we were unable to recover it. 00:30:35.007 [2024-12-09 05:25:17.359087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.007 [2024-12-09 05:25:17.359127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.007 qpair failed and we were unable to recover it. 00:30:35.007 [2024-12-09 05:25:17.359375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.007 [2024-12-09 05:25:17.359419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.007 qpair failed and we were unable to recover it. 00:30:35.007 [2024-12-09 05:25:17.359703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.007 [2024-12-09 05:25:17.359742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.007 qpair failed and we were unable to recover it. 00:30:35.007 [2024-12-09 05:25:17.360001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.007 [2024-12-09 05:25:17.360041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.007 qpair failed and we were unable to recover it. 
00:30:35.007 [2024-12-09 05:25:17.360236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.360278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.360546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.360587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.360864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.360904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.361101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.361142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.361377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.361418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.361705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.361745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.362024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.362064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.362330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.362371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.362601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.362641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.362924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.362963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.363231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.363272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.363467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.363507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.363815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.363855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.364112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.364152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.364308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.364350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.364556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.364595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.364822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.364862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.365117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.365157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.365381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.365423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.365626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.365666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.365809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.365849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.366067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.366107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.366305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.366347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.366603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.366643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.366948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.366988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.367223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.367265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.367527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.367567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.367841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.367887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.368171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.368219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.368423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.368463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.007 [2024-12-09 05:25:17.368747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.007 [2024-12-09 05:25:17.368787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.007 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.369071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.369111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.369397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.369438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.369725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.369765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.369913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.369953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.370233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.370275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.370541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.370581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.370783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.370822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.371127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.371167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.371453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.371493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.371778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.371818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.372104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.372145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.372488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.372530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.372814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.372853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.373046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.373086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.373353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.373395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.373654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.373694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.373979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.374019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.374301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.374343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.374605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.374644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.374927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.374967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.375199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.375247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.375534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.375573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.375712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.375752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.376063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.376103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.376375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.376416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.376718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.376758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.377036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.377077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.377295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.377337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.377545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.377586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.377868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.377907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.378170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.378221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.378488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.378528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.378798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.378838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.379129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.379170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.379503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.379556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.379874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.379916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.008 [2024-12-09 05:25:17.380177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.008 [2024-12-09 05:25:17.380239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.008 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.380449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.380489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.380715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.380755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.381060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.381100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.381338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.381381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.381664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.381704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.381986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.382025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.382240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.382282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.382557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.382598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.382815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.382855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.383160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.383200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.383505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.383547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.383829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.383870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.384080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.384120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.384443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.384486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.384727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.384767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.384979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.385020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.385244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.385290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.385520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.385560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.385865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.385906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.386199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.386249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.386510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.386550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.386775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.386815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.387022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.387062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.387331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.387372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.387669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.387710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.388042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.388081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.388370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.009 [2024-12-09 05:25:17.388411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.009 qpair failed and we were unable to recover it.
00:30:35.009 [2024-12-09 05:25:17.388719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.009 [2024-12-09 05:25:17.388759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.009 qpair failed and we were unable to recover it. 00:30:35.009 [2024-12-09 05:25:17.388967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.009 [2024-12-09 05:25:17.389007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.009 qpair failed and we were unable to recover it. 00:30:35.009 [2024-12-09 05:25:17.389223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.009 [2024-12-09 05:25:17.389275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.009 qpair failed and we were unable to recover it. 00:30:35.009 [2024-12-09 05:25:17.389510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.009 [2024-12-09 05:25:17.389550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.009 qpair failed and we were unable to recover it. 00:30:35.009 [2024-12-09 05:25:17.389783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.009 [2024-12-09 05:25:17.389823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.009 qpair failed and we were unable to recover it. 
00:30:35.009 [2024-12-09 05:25:17.390082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.009 [2024-12-09 05:25:17.390122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.009 qpair failed and we were unable to recover it. 00:30:35.009 [2024-12-09 05:25:17.390405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.009 [2024-12-09 05:25:17.390447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.009 qpair failed and we were unable to recover it. 00:30:35.009 [2024-12-09 05:25:17.390706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.009 [2024-12-09 05:25:17.390746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.009 qpair failed and we were unable to recover it. 00:30:35.009 [2024-12-09 05:25:17.391030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.009 [2024-12-09 05:25:17.391070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.009 qpair failed and we were unable to recover it. 00:30:35.009 [2024-12-09 05:25:17.391346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.009 [2024-12-09 05:25:17.391388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.009 qpair failed and we were unable to recover it. 
00:30:35.009 [2024-12-09 05:25:17.391646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.009 [2024-12-09 05:25:17.391686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.009 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.391971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.392011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.392274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.392322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.392594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.392633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.392848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.392888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 
00:30:35.010 [2024-12-09 05:25:17.393091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.393132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.393367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.393409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.393631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.393671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.393979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.394019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.394302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.394344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 
00:30:35.010 [2024-12-09 05:25:17.394634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.394673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.394826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.394866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.395125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.395165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.395482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.395538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.395698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.395739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 
00:30:35.010 [2024-12-09 05:25:17.395941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.395980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.396273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.396316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.396548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.396587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.396890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.396929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.397222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.397263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 
00:30:35.010 [2024-12-09 05:25:17.397541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.397582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.397861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.397900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.398162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.398202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.398487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.398528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.398801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.398840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 
00:30:35.010 [2024-12-09 05:25:17.399117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.399157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.399434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.399477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.399684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.399724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.399974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.400014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.400235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.400278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 
00:30:35.010 [2024-12-09 05:25:17.400589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.400628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.400793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.400833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.401068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.401109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.401389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.401430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.010 qpair failed and we were unable to recover it. 00:30:35.010 [2024-12-09 05:25:17.401678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.010 [2024-12-09 05:25:17.401718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 
00:30:35.011 [2024-12-09 05:25:17.402031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.402072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.402369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.402410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.402613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.402652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.402861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.402900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.403159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.403199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 
00:30:35.011 [2024-12-09 05:25:17.403488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.403528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.403788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.403829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.403988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.404039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.404325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.404367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.404633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.404673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 
00:30:35.011 [2024-12-09 05:25:17.404990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.405030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.405258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.405299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.405609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.405650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.405936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.405975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.406262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.406303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 
00:30:35.011 [2024-12-09 05:25:17.406586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.406626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.406836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.406875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.407103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.407143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.407426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.407467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.407731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.407770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 
00:30:35.011 [2024-12-09 05:25:17.407993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.408034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.408234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.408277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.408505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.408545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.408784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.408824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.409025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.409066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 
00:30:35.011 [2024-12-09 05:25:17.409352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.409393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.409548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.409588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.409796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.409837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.410098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.410137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.410363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.410404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 
00:30:35.011 [2024-12-09 05:25:17.410606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.410646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.410853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.410893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.411200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.411250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.411511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.411550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.411843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.411883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 
00:30:35.011 [2024-12-09 05:25:17.412085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.412125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.412460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.412501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.011 qpair failed and we were unable to recover it. 00:30:35.011 [2024-12-09 05:25:17.412763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.011 [2024-12-09 05:25:17.412802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.412947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.412987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.413222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.413263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 
00:30:35.012 [2024-12-09 05:25:17.413572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.413612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.413778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.413818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.414047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.414087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.414364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.414406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.414601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.414641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 
00:30:35.012 [2024-12-09 05:25:17.414922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.414961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.415177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.415227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.415395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.415441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.415675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.415714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.415930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.415969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 
00:30:35.012 [2024-12-09 05:25:17.416253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.416294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.416556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.416596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.416807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.416846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.417080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.417120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.417427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.417469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 
00:30:35.012 [2024-12-09 05:25:17.417677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.417717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.417947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.417987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.418195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.418267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.418467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.418507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.418705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.418745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 
00:30:35.012 [2024-12-09 05:25:17.418948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.418989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.419255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.419297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.419505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.419545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.419682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.419722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.419998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.420038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 
00:30:35.012 [2024-12-09 05:25:17.420321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.420362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.420564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.420603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.420863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.420903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.421180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.421227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.421488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.421527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 
00:30:35.012 [2024-12-09 05:25:17.421746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.421786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.422033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.422073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.422347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.422387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.422546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.422586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 00:30:35.012 [2024-12-09 05:25:17.422805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.012 [2024-12-09 05:25:17.422846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.012 qpair failed and we were unable to recover it. 
00:30:35.012 [2024-12-09 05:25:17.423128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.423167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.423430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.423491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.423788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.423829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.424122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.424163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.424472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.424515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 
00:30:35.013 [2024-12-09 05:25:17.424686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.424726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.424993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.425033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.425338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.425380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.425539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.425580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.425837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.425877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 
00:30:35.013 [2024-12-09 05:25:17.426112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.426152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.426481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.426522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.426731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.426777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.426980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.427020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.427328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.427370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 
00:30:35.013 [2024-12-09 05:25:17.427597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.427636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.427856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.427896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.428177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.428227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.428512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.428553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.428842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.428883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 
00:30:35.013 [2024-12-09 05:25:17.429190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.429242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.429467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.429507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.429792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.429833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.430048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.430088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.430373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.430414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 
00:30:35.013 [2024-12-09 05:25:17.430655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.430694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.431026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.431067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.431343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.431384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.431599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.431638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.431900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.431940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 
00:30:35.013 [2024-12-09 05:25:17.432229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.432270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.432484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.432523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.432746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.432788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.433046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.433086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.433376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.433417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 
00:30:35.013 [2024-12-09 05:25:17.433631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.433671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.433895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.433935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.434194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.013 [2024-12-09 05:25:17.434258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.013 qpair failed and we were unable to recover it. 00:30:35.013 [2024-12-09 05:25:17.434490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.434530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 00:30:35.014 [2024-12-09 05:25:17.434776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.434826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 
00:30:35.014 [2024-12-09 05:25:17.435113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.435154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 00:30:35.014 [2024-12-09 05:25:17.435320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.435362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 00:30:35.014 [2024-12-09 05:25:17.435616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.435656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 00:30:35.014 [2024-12-09 05:25:17.435895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.435935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 00:30:35.014 [2024-12-09 05:25:17.436082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.436122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 
00:30:35.014 [2024-12-09 05:25:17.436379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.436422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 00:30:35.014 [2024-12-09 05:25:17.436619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.436660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 00:30:35.014 [2024-12-09 05:25:17.436857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.436897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 00:30:35.014 [2024-12-09 05:25:17.437183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.437233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 00:30:35.014 [2024-12-09 05:25:17.437404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.437444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 
00:30:35.014 [2024-12-09 05:25:17.437706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.437745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 00:30:35.014 [2024-12-09 05:25:17.437988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.438029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 00:30:35.014 [2024-12-09 05:25:17.438289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.438337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 00:30:35.014 [2024-12-09 05:25:17.438504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.438545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 00:30:35.014 [2024-12-09 05:25:17.438749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.438789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 
00:30:35.014 [2024-12-09 05:25:17.439019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.439059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 00:30:35.014 [2024-12-09 05:25:17.439356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.439397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 00:30:35.014 [2024-12-09 05:25:17.439610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.439649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 00:30:35.014 [2024-12-09 05:25:17.439790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.439830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 00:30:35.014 [2024-12-09 05:25:17.440019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.440058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 
00:30:35.014 [2024-12-09 05:25:17.440326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.440368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 00:30:35.014 [2024-12-09 05:25:17.440631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.440671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 00:30:35.014 [2024-12-09 05:25:17.440900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.440940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 00:30:35.014 [2024-12-09 05:25:17.441195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.014 [2024-12-09 05:25:17.441248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.014 qpair failed and we were unable to recover it. 00:30:35.291 [2024-12-09 05:25:17.441413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.291 [2024-12-09 05:25:17.441454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.291 qpair failed and we were unable to recover it. 
00:30:35.291 [2024-12-09 05:25:17.441621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.291 [2024-12-09 05:25:17.441662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.291 qpair failed and we were unable to recover it. 00:30:35.291 [2024-12-09 05:25:17.441921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.291 [2024-12-09 05:25:17.441962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.291 qpair failed and we were unable to recover it. 00:30:35.291 [2024-12-09 05:25:17.442196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.291 [2024-12-09 05:25:17.442247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.291 qpair failed and we were unable to recover it. 00:30:35.291 [2024-12-09 05:25:17.442377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.291 [2024-12-09 05:25:17.442417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.291 qpair failed and we were unable to recover it. 00:30:35.291 [2024-12-09 05:25:17.442701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.291 [2024-12-09 05:25:17.442741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.291 qpair failed and we were unable to recover it. 
00:30:35.292 [2024-12-09 05:25:17.443000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.443039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.443274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.443315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.443534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.443574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.443735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.443775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.443930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.443970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 
00:30:35.292 [2024-12-09 05:25:17.444182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.444235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.444390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.444430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.444709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.444750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.444914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.444953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.445133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.445183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 
00:30:35.292 [2024-12-09 05:25:17.445413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.445460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.445669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.445711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.445856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.445897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.446091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.446131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.446289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.446331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 
00:30:35.292 [2024-12-09 05:25:17.446592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.446632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.446835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.446876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.447019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.447060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.447320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.447362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.447559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.447599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 
00:30:35.292 [2024-12-09 05:25:17.447734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.447774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.447996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.448036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.448241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.448290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.448495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.448535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.448738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.448778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 
00:30:35.292 [2024-12-09 05:25:17.448988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.449028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.449234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.449274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.449562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.449602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.449809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.449849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.449996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.450035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 
00:30:35.292 [2024-12-09 05:25:17.450257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.450300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.450456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.450498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.450694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.450733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.450992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.451032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.451238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.451281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 
00:30:35.292 [2024-12-09 05:25:17.451431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.451471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.451622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.292 [2024-12-09 05:25:17.451663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.292 qpair failed and we were unable to recover it. 00:30:35.292 [2024-12-09 05:25:17.451944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.451985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.452123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.452162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.452400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.452444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 
00:30:35.293 [2024-12-09 05:25:17.452597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.452637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.452850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.452890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.453097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.453137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.453279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.453321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.453479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.453519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 
00:30:35.293 [2024-12-09 05:25:17.453660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.453700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.453926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.453966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.454174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.454226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.454486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.454526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.454657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.454701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 
00:30:35.293 [2024-12-09 05:25:17.454982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.455021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.455227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.455269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.455462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.455502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.455715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.455755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.455904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.455944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 
00:30:35.293 [2024-12-09 05:25:17.456099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.456143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.456305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.456347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.456489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.456529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.456668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.456708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.456833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.456873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 
00:30:35.293 [2024-12-09 05:25:17.457007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.457047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.457328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.457369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.457612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.457652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.457818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.457859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.458009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.458049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 
00:30:35.293 [2024-12-09 05:25:17.458261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.458301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.458575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.458615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.458875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.458915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.459227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.459268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.459473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.459513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 
00:30:35.293 [2024-12-09 05:25:17.459712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.459753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.460040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.460080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.460296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.460338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.460544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.460585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.293 qpair failed and we were unable to recover it. 00:30:35.293 [2024-12-09 05:25:17.460733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.293 [2024-12-09 05:25:17.460772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.294 qpair failed and we were unable to recover it. 
00:30:35.294 [2024-12-09 05:25:17.460976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.294 [2024-12-09 05:25:17.461017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.294 qpair failed and we were unable to recover it. 00:30:35.294 [2024-12-09 05:25:17.461165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.294 [2024-12-09 05:25:17.461205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.294 qpair failed and we were unable to recover it. 00:30:35.294 [2024-12-09 05:25:17.461386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.294 [2024-12-09 05:25:17.461426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.294 qpair failed and we were unable to recover it. 00:30:35.294 [2024-12-09 05:25:17.461672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.294 [2024-12-09 05:25:17.461712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.294 qpair failed and we were unable to recover it. 00:30:35.294 [2024-12-09 05:25:17.462020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.294 [2024-12-09 05:25:17.462060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.294 qpair failed and we were unable to recover it. 
00:30:35.294 [2024-12-09 05:25:17.462255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.294 [2024-12-09 05:25:17.462296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.294 qpair failed and we were unable to recover it. 00:30:35.294 [2024-12-09 05:25:17.462542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.294 [2024-12-09 05:25:17.462583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.294 qpair failed and we were unable to recover it. 00:30:35.294 [2024-12-09 05:25:17.462808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.294 [2024-12-09 05:25:17.462849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.294 qpair failed and we were unable to recover it. 00:30:35.294 [2024-12-09 05:25:17.463082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.294 [2024-12-09 05:25:17.463122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.294 qpair failed and we were unable to recover it. 00:30:35.294 [2024-12-09 05:25:17.463336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.294 [2024-12-09 05:25:17.463377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.294 qpair failed and we were unable to recover it. 
00:30:35.294 [2024-12-09 05:25:17.463569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.294 [2024-12-09 05:25:17.463610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.294 qpair failed and we were unable to recover it.
[... the same three-entry sequence (posix_sock_create connect() errno = 111 → nvme_tcp_qpair_connect_sock sock connection error → "qpair failed and we were unable to recover it.") repeats continuously from 05:25:17.463 through 05:25:17.492, alternating between tqpair=0x7ff964000b90 and tqpair=0x1c91000, all against addr=10.0.0.2, port=4420 ...]
00:30:35.297 [2024-12-09 05:25:17.492624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.297 [2024-12-09 05:25:17.492665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.297 qpair failed and we were unable to recover it. 00:30:35.297 [2024-12-09 05:25:17.492820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.297 [2024-12-09 05:25:17.492861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.297 qpair failed and we were unable to recover it. 00:30:35.297 [2024-12-09 05:25:17.493144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.297 [2024-12-09 05:25:17.493188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.297 qpair failed and we were unable to recover it. 00:30:35.297 [2024-12-09 05:25:17.493346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.297 [2024-12-09 05:25:17.493386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.297 qpair failed and we were unable to recover it. 00:30:35.297 [2024-12-09 05:25:17.493553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.297 [2024-12-09 05:25:17.493597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.297 qpair failed and we were unable to recover it. 
00:30:35.297 [2024-12-09 05:25:17.493794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.297 [2024-12-09 05:25:17.493835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.297 qpair failed and we were unable to recover it. 00:30:35.297 [2024-12-09 05:25:17.493965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.297 [2024-12-09 05:25:17.494005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.297 qpair failed and we were unable to recover it. 00:30:35.297 [2024-12-09 05:25:17.494221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.297 [2024-12-09 05:25:17.494264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.297 qpair failed and we were unable to recover it. 00:30:35.297 [2024-12-09 05:25:17.494550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.297 [2024-12-09 05:25:17.494592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.297 qpair failed and we were unable to recover it. 00:30:35.297 [2024-12-09 05:25:17.494785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.297 [2024-12-09 05:25:17.494825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.297 qpair failed and we were unable to recover it. 
00:30:35.297 [2024-12-09 05:25:17.495139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.297 [2024-12-09 05:25:17.495180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.297 qpair failed and we were unable to recover it. 00:30:35.297 [2024-12-09 05:25:17.495498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.297 [2024-12-09 05:25:17.495539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.297 qpair failed and we were unable to recover it. 00:30:35.297 [2024-12-09 05:25:17.495733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.297 [2024-12-09 05:25:17.495774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.297 qpair failed and we were unable to recover it. 00:30:35.297 [2024-12-09 05:25:17.495921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.297 [2024-12-09 05:25:17.495961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.297 qpair failed and we were unable to recover it. 00:30:35.297 [2024-12-09 05:25:17.496172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.297 [2024-12-09 05:25:17.496223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.297 qpair failed and we were unable to recover it. 
00:30:35.297 [2024-12-09 05:25:17.496486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.297 [2024-12-09 05:25:17.496526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.297 qpair failed and we were unable to recover it. 00:30:35.297 [2024-12-09 05:25:17.496682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.297 [2024-12-09 05:25:17.496724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.297 qpair failed and we were unable to recover it. 00:30:35.297 [2024-12-09 05:25:17.496865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.297 [2024-12-09 05:25:17.496910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.297 qpair failed and we were unable to recover it. 00:30:35.297 [2024-12-09 05:25:17.497163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.297 [2024-12-09 05:25:17.497231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.297 qpair failed and we were unable to recover it. 00:30:35.297 [2024-12-09 05:25:17.497435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.297 [2024-12-09 05:25:17.497485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.297 qpair failed and we were unable to recover it. 
00:30:35.297 [2024-12-09 05:25:17.497702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.297 [2024-12-09 05:25:17.497745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.297 qpair failed and we were unable to recover it. 00:30:35.297 [2024-12-09 05:25:17.497967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.498014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.498276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.498320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.498613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.498655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.498817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.498874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 
00:30:35.298 [2024-12-09 05:25:17.499178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.499234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.499472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.499516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.499665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.499706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.499964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.500009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.500251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.500299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 
00:30:35.298 [2024-12-09 05:25:17.500518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.500561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.500790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.500838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.500982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.501023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.501254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.501298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.501503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.501544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 
00:30:35.298 [2024-12-09 05:25:17.501823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.501864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.501988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.502028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.502228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.502270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.502492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.502533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.502761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.502803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 
00:30:35.298 [2024-12-09 05:25:17.503000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.503043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.503251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.503293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.503561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.503602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.503800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.503841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.503984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.504024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 
00:30:35.298 [2024-12-09 05:25:17.504261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.504303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.504513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.504553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.504752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.504795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.504938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.504979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.505122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.505166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 
00:30:35.298 [2024-12-09 05:25:17.505426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.505476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.505776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.505838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.505999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.506044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.506308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.506350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.506564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.506605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 
00:30:35.298 [2024-12-09 05:25:17.506745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.506787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.506924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.506963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.507264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.507305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.298 [2024-12-09 05:25:17.507428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.298 [2024-12-09 05:25:17.507476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.298 qpair failed and we were unable to recover it. 00:30:35.299 [2024-12-09 05:25:17.507681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.299 [2024-12-09 05:25:17.507728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.299 qpair failed and we were unable to recover it. 
00:30:35.299 [2024-12-09 05:25:17.507888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.299 [2024-12-09 05:25:17.507928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.299 qpair failed and we were unable to recover it. 00:30:35.299 [2024-12-09 05:25:17.508144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.299 [2024-12-09 05:25:17.508190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.299 qpair failed and we were unable to recover it. 00:30:35.299 [2024-12-09 05:25:17.508409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.299 [2024-12-09 05:25:17.508456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.299 qpair failed and we were unable to recover it. 00:30:35.299 [2024-12-09 05:25:17.508677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.299 [2024-12-09 05:25:17.508719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.299 qpair failed and we were unable to recover it. 00:30:35.299 [2024-12-09 05:25:17.508927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.299 [2024-12-09 05:25:17.508982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.299 qpair failed and we were unable to recover it. 
00:30:35.299 [2024-12-09 05:25:17.509279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.299 [2024-12-09 05:25:17.509325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.299 qpair failed and we were unable to recover it. 00:30:35.299 [2024-12-09 05:25:17.509456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.299 [2024-12-09 05:25:17.509497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.299 qpair failed and we were unable to recover it. 00:30:35.299 [2024-12-09 05:25:17.509692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.299 [2024-12-09 05:25:17.509736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.299 qpair failed and we were unable to recover it. 00:30:35.299 [2024-12-09 05:25:17.509952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.299 [2024-12-09 05:25:17.509992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.299 qpair failed and we were unable to recover it. 00:30:35.299 [2024-12-09 05:25:17.510133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.299 [2024-12-09 05:25:17.510177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.299 qpair failed and we were unable to recover it. 
00:30:35.299 [2024-12-09 05:25:17.510415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.299 [2024-12-09 05:25:17.510456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.299 qpair failed and we were unable to recover it. 00:30:35.299 [2024-12-09 05:25:17.510750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.299 [2024-12-09 05:25:17.510790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.299 qpair failed and we were unable to recover it. 00:30:35.299 [2024-12-09 05:25:17.511062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.299 [2024-12-09 05:25:17.511102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.299 qpair failed and we were unable to recover it. 00:30:35.299 [2024-12-09 05:25:17.511251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.299 [2024-12-09 05:25:17.511300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.299 qpair failed and we were unable to recover it. 00:30:35.299 [2024-12-09 05:25:17.511432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.299 [2024-12-09 05:25:17.511472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.299 qpair failed and we were unable to recover it. 
00:30:35.299 [2024-12-09 05:25:17.511597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.511636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.511847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.511887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.512077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.512118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.512340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.512382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.512593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.512635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.512851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.512892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.513041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.513081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.513284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.513326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.513473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.513519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.513680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.513726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.513925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.513966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.514230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.514272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.514409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.514450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.514589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.514629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.514824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.514864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.515006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.515048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.515326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.515374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.515601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.515641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.515836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.515876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.516078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.516117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.516279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.516322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.516603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.299 [2024-12-09 05:25:17.516644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.299 qpair failed and we were unable to recover it.
00:30:35.299 [2024-12-09 05:25:17.516860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.516900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.517033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.517073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.517268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.517309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.517542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.517582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.517855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.517895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.518185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.518248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.518394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.518434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.518663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.518704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.518900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.518940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.519080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.519120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.519291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.519333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.519483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.519523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.519719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.519759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.519974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.520014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.520225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.520268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.520457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.520497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.520709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.520749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.520958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.520998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.521282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.521323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.521607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.521647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.521784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.521823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.522093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.522140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.522364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.522405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.522665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.522705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.522851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.522890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.523121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.523160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.523374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.523415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.523676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.523716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.523973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.524013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.524220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.524260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.524544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.524584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.524863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.524903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.525131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.525170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.525334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.525395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.525630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.525674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.525940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.525981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.300 qpair failed and we were unable to recover it.
00:30:35.300 [2024-12-09 05:25:17.526189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.300 [2024-12-09 05:25:17.526246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.526458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.526499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.526659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.526698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.526831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.526871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.527071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.527111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.527334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.527379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.527617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.527657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.527942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.527983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.528126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.528167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.528315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.528358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.528654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.528701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.528920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.528963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.529115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.529157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.529320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.529363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.529577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.529617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.529819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.529859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.529993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.530034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.530290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.530333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.530537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.530578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.530770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.530810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.531001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.531041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.531238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.531282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.531545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.531585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.531790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.531831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.532097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.532137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.532343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.532390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.532654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.532694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.532892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.532931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.533190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.533240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.533440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.533480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.533615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.533655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.533893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.533934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.534237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.534279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.534468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.534508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.534765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.534807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.301 [2024-12-09 05:25:17.534958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.301 [2024-12-09 05:25:17.534998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.301 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.535257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.302 [2024-12-09 05:25:17.535299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.302 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.535517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.302 [2024-12-09 05:25:17.535558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.302 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.535765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.302 [2024-12-09 05:25:17.535805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.302 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.536030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.302 [2024-12-09 05:25:17.536074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.302 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.536394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.302 [2024-12-09 05:25:17.536437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.302 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.536683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.302 [2024-12-09 05:25:17.536730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.302 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.536944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.302 [2024-12-09 05:25:17.536985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.302 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.537226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.302 [2024-12-09 05:25:17.537268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.302 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.537481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.302 [2024-12-09 05:25:17.537521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.302 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.537716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.302 [2024-12-09 05:25:17.537757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.302 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.537914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.302 [2024-12-09 05:25:17.537955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.302 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.538149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.302 [2024-12-09 05:25:17.538190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.302 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.538394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.302 [2024-12-09 05:25:17.538437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.302 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.538654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.302 [2024-12-09 05:25:17.538695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.302 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.538961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.302 [2024-12-09 05:25:17.539002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.302 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.539224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.302 [2024-12-09 05:25:17.539265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.302 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.539525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.302 [2024-12-09 05:25:17.539574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.302 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.539703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.302 [2024-12-09 05:25:17.539743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.302 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.539932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.302 [2024-12-09 05:25:17.539973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.302 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.540129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.302 [2024-12-09 05:25:17.540169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.302 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.540325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.302 [2024-12-09 05:25:17.540368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.302 qpair failed and we were unable to recover it.
00:30:35.302 [2024-12-09 05:25:17.540611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.302 [2024-12-09 05:25:17.540656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.302 qpair failed and we were unable to recover it. 00:30:35.302 [2024-12-09 05:25:17.540797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.302 [2024-12-09 05:25:17.540838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.302 qpair failed and we were unable to recover it. 00:30:35.302 [2024-12-09 05:25:17.541076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.302 [2024-12-09 05:25:17.541117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.302 qpair failed and we were unable to recover it. 00:30:35.302 [2024-12-09 05:25:17.541254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.302 [2024-12-09 05:25:17.541297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.302 qpair failed and we were unable to recover it. 00:30:35.302 [2024-12-09 05:25:17.541490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.302 [2024-12-09 05:25:17.541530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.302 qpair failed and we were unable to recover it. 
00:30:35.302 [2024-12-09 05:25:17.541755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.302 [2024-12-09 05:25:17.541795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.302 qpair failed and we were unable to recover it. 00:30:35.302 [2024-12-09 05:25:17.541988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.302 [2024-12-09 05:25:17.542028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.302 qpair failed and we were unable to recover it. 00:30:35.302 [2024-12-09 05:25:17.542239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.302 [2024-12-09 05:25:17.542289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.302 qpair failed and we were unable to recover it. 00:30:35.302 [2024-12-09 05:25:17.542497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.302 [2024-12-09 05:25:17.542539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.302 qpair failed and we were unable to recover it. 00:30:35.302 [2024-12-09 05:25:17.542689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.302 [2024-12-09 05:25:17.542730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.302 qpair failed and we were unable to recover it. 
00:30:35.302 [2024-12-09 05:25:17.542938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.302 [2024-12-09 05:25:17.542978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.302 qpair failed and we were unable to recover it. 00:30:35.302 [2024-12-09 05:25:17.543183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.302 [2024-12-09 05:25:17.543237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.302 qpair failed and we were unable to recover it. 00:30:35.302 [2024-12-09 05:25:17.543483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.302 [2024-12-09 05:25:17.543524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.302 qpair failed and we were unable to recover it. 00:30:35.302 [2024-12-09 05:25:17.543722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.302 [2024-12-09 05:25:17.543763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.302 qpair failed and we were unable to recover it. 00:30:35.302 [2024-12-09 05:25:17.543966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.302 [2024-12-09 05:25:17.544005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.302 qpair failed and we were unable to recover it. 
00:30:35.302 [2024-12-09 05:25:17.544152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.302 [2024-12-09 05:25:17.544193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.302 qpair failed and we were unable to recover it. 00:30:35.302 [2024-12-09 05:25:17.544411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.302 [2024-12-09 05:25:17.544452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.302 qpair failed and we were unable to recover it. 00:30:35.302 [2024-12-09 05:25:17.544664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.544706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.544906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.544947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.545158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.545197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 
00:30:35.303 [2024-12-09 05:25:17.545353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.545394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.545532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.545572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.545763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.545809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.546074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.546114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.546312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.546355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 
00:30:35.303 [2024-12-09 05:25:17.546562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.546602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.546905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.546946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.547139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.547179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.547327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.547367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.547573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.547614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 
00:30:35.303 [2024-12-09 05:25:17.547830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.547870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.548076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.548116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.548252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.548294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.548574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.548615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.548830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.548870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 
00:30:35.303 [2024-12-09 05:25:17.549090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.549134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.549379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.549422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.549661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.549701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.549841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.549881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.550079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.550119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 
00:30:35.303 [2024-12-09 05:25:17.550261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.550303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.550498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.550538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.550687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.550727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.551043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.551090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.551360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.551404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 
00:30:35.303 [2024-12-09 05:25:17.551545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.551584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.551727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.551768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.551979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.552027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.552230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.552271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.552497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.552538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 
00:30:35.303 [2024-12-09 05:25:17.552829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.552870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.553065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.553105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.553309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.553350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.553494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.553534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 00:30:35.303 [2024-12-09 05:25:17.553745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.303 [2024-12-09 05:25:17.553785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.303 qpair failed and we were unable to recover it. 
00:30:35.304 [2024-12-09 05:25:17.553927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.553968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.554178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.554237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.554465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.554506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.554671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.554711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.554850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.554891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 
00:30:35.304 [2024-12-09 05:25:17.555126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.555169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.555347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.555389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.555524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.555571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.555878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.555917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.556120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.556160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 
00:30:35.304 [2024-12-09 05:25:17.556419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.556464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.556603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.556643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.556793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.556833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.556972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.557019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.557235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.557276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 
00:30:35.304 [2024-12-09 05:25:17.557478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.557518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.557714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.557754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.557945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.557985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.558269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.558311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.558592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.558633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 
00:30:35.304 [2024-12-09 05:25:17.558830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.558870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.559095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.559136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.559305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.559347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.559566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.559605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.559834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.559874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 
00:30:35.304 [2024-12-09 05:25:17.559999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.560040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.560251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.560293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.560501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.560542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.560691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.560732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.560892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.560931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 
00:30:35.304 [2024-12-09 05:25:17.561136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.561176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.561356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.561397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.561605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.561645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.561790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.561830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.561967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.562013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 
00:30:35.304 [2024-12-09 05:25:17.562249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.562294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.562508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.562551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.562706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.304 [2024-12-09 05:25:17.562746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.304 qpair failed and we were unable to recover it. 00:30:35.304 [2024-12-09 05:25:17.562950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.562990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.563123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.563163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 
00:30:35.305 [2024-12-09 05:25:17.563315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.563356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.563551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.563591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.563740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.563780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.563990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.564030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.564256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.564298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 
00:30:35.305 [2024-12-09 05:25:17.564452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.564492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.564625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.564665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.564873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.564913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.565139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.565182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.565422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.565463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 
00:30:35.305 [2024-12-09 05:25:17.565596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.565638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.565781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.565821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.566024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.566066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.566230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.566284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.566479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.566519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 
00:30:35.305 [2024-12-09 05:25:17.566735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.566776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.566985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.567027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.567248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.567291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.567435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.567475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.567684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.567725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 
00:30:35.305 [2024-12-09 05:25:17.567867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.567908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.568116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.568157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.568371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.568413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.568544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.568585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.568831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.568871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 
00:30:35.305 [2024-12-09 05:25:17.569135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.569176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.569454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.569496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.569735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.569775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.569919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.569959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.570165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.570205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 
00:30:35.305 [2024-12-09 05:25:17.570418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.570459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.305 [2024-12-09 05:25:17.570651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.305 [2024-12-09 05:25:17.570691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.305 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.571020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.571060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.571274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.571315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.571515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.571562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 
00:30:35.306 [2024-12-09 05:25:17.571711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.571751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.571891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.571931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.572151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.572192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.572413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.572454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.572647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.572688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 
00:30:35.306 [2024-12-09 05:25:17.572903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.572943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.573079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.573119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.573262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.573303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.573560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.573600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.573862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.573904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 
00:30:35.306 [2024-12-09 05:25:17.574121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.574160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.574381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.574423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.574715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.574755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.574965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.575005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.575166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.575205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 
00:30:35.306 [2024-12-09 05:25:17.575353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.575393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.575551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.575591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.575831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.575871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.576009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.576049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.576198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.576249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 
00:30:35.306 [2024-12-09 05:25:17.576443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.576483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.576617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.576656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.576886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.576926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.577133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.577174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.577329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.577369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 
00:30:35.306 [2024-12-09 05:25:17.577646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.577687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.577889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.577935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.578110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.578161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.578406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.578451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.578736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.578777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 
00:30:35.306 [2024-12-09 05:25:17.578928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.578968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.579175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.579226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.579441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.579483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.579619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.579660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 00:30:35.306 [2024-12-09 05:25:17.579863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.306 [2024-12-09 05:25:17.579905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.306 qpair failed and we were unable to recover it. 
00:30:35.307 [2024-12-09 05:25:17.580117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.307 [2024-12-09 05:25:17.580157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.307 qpair failed and we were unable to recover it. 00:30:35.307 [2024-12-09 05:25:17.580358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.307 [2024-12-09 05:25:17.580400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.307 qpair failed and we were unable to recover it. 00:30:35.307 [2024-12-09 05:25:17.580612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.307 [2024-12-09 05:25:17.580652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.307 qpair failed and we were unable to recover it. 00:30:35.307 [2024-12-09 05:25:17.580877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.307 [2024-12-09 05:25:17.580916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.307 qpair failed and we were unable to recover it. 00:30:35.307 [2024-12-09 05:25:17.581177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.307 [2024-12-09 05:25:17.581234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.307 qpair failed and we were unable to recover it. 
00:30:35.307 [2024-12-09 05:25:17.581491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.307 [2024-12-09 05:25:17.581532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.307 qpair failed and we were unable to recover it. 00:30:35.307 [2024-12-09 05:25:17.581755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.307 [2024-12-09 05:25:17.581794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.307 qpair failed and we were unable to recover it. 00:30:35.307 [2024-12-09 05:25:17.582051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.307 [2024-12-09 05:25:17.582091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.307 qpair failed and we were unable to recover it. 00:30:35.307 [2024-12-09 05:25:17.582293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.307 [2024-12-09 05:25:17.582334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.307 qpair failed and we were unable to recover it. 00:30:35.307 [2024-12-09 05:25:17.582573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.307 [2024-12-09 05:25:17.582614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.307 qpair failed and we were unable to recover it. 
00:30:35.307 [2024-12-09 05:25:17.582824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.307 [2024-12-09 05:25:17.582864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.307 qpair failed and we were unable to recover it. 00:30:35.307 [2024-12-09 05:25:17.583053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.307 [2024-12-09 05:25:17.583094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.307 qpair failed and we were unable to recover it. 00:30:35.307 [2024-12-09 05:25:17.583405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.307 [2024-12-09 05:25:17.583446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.307 qpair failed and we were unable to recover it. 00:30:35.307 [2024-12-09 05:25:17.583583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.307 [2024-12-09 05:25:17.583623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.307 qpair failed and we were unable to recover it. 00:30:35.307 [2024-12-09 05:25:17.583906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.307 [2024-12-09 05:25:17.583947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.307 qpair failed and we were unable to recover it. 
00:30:35.307 [2024-12-09 05:25:17.584076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.584115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.584270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.584311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.584521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.584561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.584717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.584758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.584960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.584999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.585606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.585743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.586163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.586228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.586519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.586562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.586756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.586797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.587027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.587068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.587326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.587368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.587525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.587566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.587773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.587813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.588099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.588139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.588295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.588336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.588546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.588586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.588799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.588855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.589087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.589140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.589299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.589343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.589628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.589668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.589876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.589916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.590118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.590158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.307 [2024-12-09 05:25:17.590311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.307 [2024-12-09 05:25:17.590353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.307 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.590623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.590662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.590863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.590903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.591094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.591134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.591339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.591380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.591504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.591544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.591750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.591789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.591980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.592027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.592263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.592305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.592512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.592551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.592697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.592737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.592970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.593011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.593176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.593227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.593438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.593478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.593606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.593646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.593837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.593878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.594011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.594050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.594192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.594256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.594515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.594556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.594763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.594803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.595020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.595060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.595217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.595259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.595522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.595562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.595821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.595861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.595995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.596036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.596233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.596275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.596475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.596514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.596709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.596749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.597005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.597046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.597220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.597261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.597402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.597442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.597657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.597698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.597894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.597934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.598160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.598200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.598457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.598505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.598670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.598712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.598927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.598966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.599113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.599153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.599311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.308 [2024-12-09 05:25:17.599353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.308 qpair failed and we were unable to recover it.
00:30:35.308 [2024-12-09 05:25:17.599495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.599535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.599821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.599861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.600061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.600100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.600234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.600276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.600574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.600614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.600815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.600854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.601056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.601096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.601242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.601283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.601476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.601515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.601728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.601768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.601955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.601995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.602195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.602248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.602393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.602433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.602632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.602672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.602896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.602935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.603143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.603183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.603336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.603377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.603593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.603633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.603843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.603883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.604078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.604118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.604350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.604395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.604604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.604645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.604841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.604889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.605017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.605058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.605261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.605304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.605497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.605537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.605693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.605733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.605946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.605987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.606178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.606228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.606540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.606581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.606768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.606809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.607071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.607111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.607324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.607367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.607584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.309 [2024-12-09 05:25:17.607624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.309 qpair failed and we were unable to recover it.
00:30:35.309 [2024-12-09 05:25:17.607894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.309 [2024-12-09 05:25:17.607934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.309 qpair failed and we were unable to recover it. 00:30:35.309 [2024-12-09 05:25:17.608083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.309 [2024-12-09 05:25:17.608123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.309 qpair failed and we were unable to recover it. 00:30:35.309 [2024-12-09 05:25:17.608353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.309 [2024-12-09 05:25:17.608396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.309 qpair failed and we were unable to recover it. 00:30:35.309 [2024-12-09 05:25:17.608555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.309 [2024-12-09 05:25:17.608595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.309 qpair failed and we were unable to recover it. 00:30:35.309 [2024-12-09 05:25:17.608751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.608791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 
00:30:35.310 [2024-12-09 05:25:17.608987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.609027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.609250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.609292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.609596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.609637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.609868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.609909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.610050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.610090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 
00:30:35.310 [2024-12-09 05:25:17.610360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.610402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.610607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.610647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.610858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.610899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.611039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.611080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.611274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.611316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 
00:30:35.310 [2024-12-09 05:25:17.611460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.611502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.611659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.611699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.611897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.611938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.612169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.612222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.612444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.612485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 
00:30:35.310 [2024-12-09 05:25:17.612618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.612658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.612848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.612890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.613039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.613082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.613345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.613387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.613578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.613618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 
00:30:35.310 [2024-12-09 05:25:17.613814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.613854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.614048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.614088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.614252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.614294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.614514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.614561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.614695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.614734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 
00:30:35.310 [2024-12-09 05:25:17.615013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.615052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.615228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.615270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.615473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.615512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.615702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.615742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.615971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.616011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 
00:30:35.310 [2024-12-09 05:25:17.616205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.616260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.616402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.616442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.310 qpair failed and we were unable to recover it. 00:30:35.310 [2024-12-09 05:25:17.616580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.310 [2024-12-09 05:25:17.616620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.616766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.616805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.617076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.617116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 
00:30:35.311 [2024-12-09 05:25:17.617303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.617345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.617567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.617606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.617834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.617874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.618098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.618139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.618286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.618328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 
00:30:35.311 [2024-12-09 05:25:17.618460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.618501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.618738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.618778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.618910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.618950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.619233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.619274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.619420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.619460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 
00:30:35.311 [2024-12-09 05:25:17.619654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.619695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.619830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.619869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.620182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.620236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.620429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.620469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.620677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.620718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 
00:30:35.311 [2024-12-09 05:25:17.620861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.620913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.621147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.621188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.621377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.621418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.621632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.621672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.621894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.621934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 
00:30:35.311 [2024-12-09 05:25:17.622070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.622110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.622308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.622368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.622565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.622606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.622759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.622800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.622941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.622981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 
00:30:35.311 [2024-12-09 05:25:17.623193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.623249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.623399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.623439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.623635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.623674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.623887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.623928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.624104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.624145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 
00:30:35.311 [2024-12-09 05:25:17.624345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.624387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.624670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.624710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.624967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.625008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.625265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.625308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.311 [2024-12-09 05:25:17.625574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.625614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 
00:30:35.311 [2024-12-09 05:25:17.625815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.311 [2024-12-09 05:25:17.625854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.311 qpair failed and we were unable to recover it. 00:30:35.312 [2024-12-09 05:25:17.626042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.312 [2024-12-09 05:25:17.626082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.312 qpair failed and we were unable to recover it. 00:30:35.312 [2024-12-09 05:25:17.626345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.312 [2024-12-09 05:25:17.626386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.312 qpair failed and we were unable to recover it. 00:30:35.312 [2024-12-09 05:25:17.626660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.312 [2024-12-09 05:25:17.626700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.312 qpair failed and we were unable to recover it. 00:30:35.312 [2024-12-09 05:25:17.626844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.312 [2024-12-09 05:25:17.626885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.312 qpair failed and we were unable to recover it. 
00:30:35.312 [2024-12-09 05:25:17.627117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.312 [2024-12-09 05:25:17.627157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.312 qpair failed and we were unable to recover it. 00:30:35.312 [2024-12-09 05:25:17.627313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.312 [2024-12-09 05:25:17.627354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.312 qpair failed and we were unable to recover it. 00:30:35.312 [2024-12-09 05:25:17.627587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.312 [2024-12-09 05:25:17.627631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.312 qpair failed and we were unable to recover it. 00:30:35.312 [2024-12-09 05:25:17.627894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.312 [2024-12-09 05:25:17.627934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.312 qpair failed and we were unable to recover it. 00:30:35.312 [2024-12-09 05:25:17.628091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.312 [2024-12-09 05:25:17.628131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.312 qpair failed and we were unable to recover it. 
00:30:35.312 [2024-12-09 05:25:17.628360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.312 [2024-12-09 05:25:17.628402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.312 qpair failed and we were unable to recover it. 00:30:35.312 [2024-12-09 05:25:17.628612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.312 [2024-12-09 05:25:17.628652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.312 qpair failed and we were unable to recover it. 00:30:35.312 [2024-12-09 05:25:17.628790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.312 [2024-12-09 05:25:17.628830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.312 qpair failed and we were unable to recover it. 00:30:35.312 [2024-12-09 05:25:17.629040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.312 [2024-12-09 05:25:17.629080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.312 qpair failed and we were unable to recover it. 00:30:35.312 [2024-12-09 05:25:17.629230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.312 [2024-12-09 05:25:17.629272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.312 qpair failed and we were unable to recover it. 
00:30:35.312 [2024-12-09 05:25:17.629494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.312 [2024-12-09 05:25:17.629534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.312 qpair failed and we were unable to recover it. 00:30:35.312 [2024-12-09 05:25:17.629736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.312 [2024-12-09 05:25:17.629776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.312 qpair failed and we were unable to recover it. 00:30:35.312 [2024-12-09 05:25:17.629898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.312 [2024-12-09 05:25:17.629938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.312 qpair failed and we were unable to recover it. 00:30:35.312 [2024-12-09 05:25:17.630136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.312 [2024-12-09 05:25:17.630175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.312 qpair failed and we were unable to recover it. 00:30:35.312 [2024-12-09 05:25:17.630476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.312 [2024-12-09 05:25:17.630517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.312 qpair failed and we were unable to recover it. 
00:30:35.312 [2024-12-09 05:25:17.630673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.312 [2024-12-09 05:25:17.630713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.312 qpair failed and we were unable to recover it.
00:30:35.312 [2024-12-09 05:25:17.630996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.312 [2024-12-09 05:25:17.631037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.312 qpair failed and we were unable to recover it.
00:30:35.312 [2024-12-09 05:25:17.631273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.312 [2024-12-09 05:25:17.631315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.312 qpair failed and we were unable to recover it.
00:30:35.312 [2024-12-09 05:25:17.631545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.312 [2024-12-09 05:25:17.631584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.312 qpair failed and we were unable to recover it.
00:30:35.312 [2024-12-09 05:25:17.631720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.312 [2024-12-09 05:25:17.631760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.312 qpair failed and we were unable to recover it.
00:30:35.312 [2024-12-09 05:25:17.631906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.312 [2024-12-09 05:25:17.631946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.312 qpair failed and we were unable to recover it.
00:30:35.312 [2024-12-09 05:25:17.632170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.312 [2024-12-09 05:25:17.632219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.312 qpair failed and we were unable to recover it.
00:30:35.312 [2024-12-09 05:25:17.632355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.312 [2024-12-09 05:25:17.632395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.312 qpair failed and we were unable to recover it.
00:30:35.312 [2024-12-09 05:25:17.632651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.312 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:35.312 [2024-12-09 05:25:17.632691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.312 qpair failed and we were unable to recover it.
00:30:35.312 [2024-12-09 05:25:17.632909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.312 [2024-12-09 05:25:17.632949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.312 qpair failed and we were unable to recover it.
00:30:35.312 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:30:35.312 [2024-12-09 05:25:17.633235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.312 [2024-12-09 05:25:17.633279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.312 qpair failed and we were unable to recover it.
00:30:35.312 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:35.312 [2024-12-09 05:25:17.633491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.312 [2024-12-09 05:25:17.633530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.312 qpair failed and we were unable to recover it.
00:30:35.312 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:35.312 [2024-12-09 05:25:17.633722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.312 [2024-12-09 05:25:17.633770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.312 qpair failed and we were unable to recover it.
00:30:35.312 [2024-12-09 05:25:17.633985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.312 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:35.312 [2024-12-09 05:25:17.634026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.312 qpair failed and we were unable to recover it.
00:30:35.312 [2024-12-09 05:25:17.634225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.312 [2024-12-09 05:25:17.634266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.312 qpair failed and we were unable to recover it.
00:30:35.312 [2024-12-09 05:25:17.634466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.312 [2024-12-09 05:25:17.634506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.312 qpair failed and we were unable to recover it.
00:30:35.312 [2024-12-09 05:25:17.634657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.312 [2024-12-09 05:25:17.634697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.312 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.634887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.634927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.635146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.635186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.635330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.635371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.635518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.635558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.635687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.635727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.635991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.636032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.636195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.636246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.636455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.636496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.636689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.636730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.636876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.636916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.637125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.637165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.637395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.637441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.637642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.637683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.637889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.637929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.638154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.638194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.638428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.638468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.638623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.638663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.638808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.638849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.639107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.639147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.639440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.639481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.639635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.639675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.639883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.639923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.640161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.640222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.640432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.640475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.640634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.640674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.640814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.640854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.641052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.641091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.641314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.641356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.641560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.641600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.641737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.641778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.641926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.641966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.642190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.642242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.642396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.642436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.642650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.642703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.642836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.642876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.643074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.643114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.643282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.643325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.313 [2024-12-09 05:25:17.643599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.313 [2024-12-09 05:25:17.643639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.313 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.643780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.643819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.644048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.644088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.644226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.644267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.644424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.644465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.644727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.644767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.644978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.645018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.645152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.645192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.645329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.645370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.645524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.645564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.645797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.645837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.645987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.646027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.646250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.646295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.646459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.646499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.646774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.646814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.647015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.647056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.647199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.647252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.647405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.647445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.647666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.647707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.647909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.647949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.648077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.648117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.648339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.648382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.648601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.648641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.648861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.648902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.649042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.649084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.649231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.649278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.649429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.649469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.649624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.649664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.649880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.649921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.650117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.650157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.650381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.650422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.650615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.650655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.650856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.650896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.651098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.651138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.314 qpair failed and we were unable to recover it.
00:30:35.314 [2024-12-09 05:25:17.651308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.314 [2024-12-09 05:25:17.651349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.315 qpair failed and we were unable to recover it.
00:30:35.315 [2024-12-09 05:25:17.651553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.315 [2024-12-09 05:25:17.651593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.315 qpair failed and we were unable to recover it.
00:30:35.315 [2024-12-09 05:25:17.651787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.315 [2024-12-09 05:25:17.651826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.315 qpair failed and we were unable to recover it.
00:30:35.315 [2024-12-09 05:25:17.651976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.315 [2024-12-09 05:25:17.652017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.315 qpair failed and we were unable to recover it.
00:30:35.315 [2024-12-09 05:25:17.652142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.315 [2024-12-09 05:25:17.652182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.315 qpair failed and we were unable to recover it.
00:30:35.315 [2024-12-09 05:25:17.652336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.315 [2024-12-09 05:25:17.652376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.315 qpair failed and we were unable to recover it.
00:30:35.315 [2024-12-09 05:25:17.652583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.652624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.652771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.652812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.653027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.653066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.653294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.653336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.653497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.653538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 
00:30:35.315 [2024-12-09 05:25:17.653672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.653712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.653848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.653889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.654085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.654126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.654342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.654384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.654520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.654560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 
00:30:35.315 [2024-12-09 05:25:17.654697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.654738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.654869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.654910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.655116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.655160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.655398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.655448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.655654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.655694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 
00:30:35.315 [2024-12-09 05:25:17.655837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.655879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.656141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.656182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.656483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.656525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.656657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.656697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.656831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.656871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 
00:30:35.315 [2024-12-09 05:25:17.657073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.657114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.657267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.657310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.657553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.657594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.657805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.657846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.658044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.658084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 
00:30:35.315 [2024-12-09 05:25:17.658325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.658369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.658581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.658621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.658765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.658805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.659037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.659076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.659272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.659313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 
00:30:35.315 [2024-12-09 05:25:17.659452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.659494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.659633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.315 [2024-12-09 05:25:17.659674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.315 qpair failed and we were unable to recover it. 00:30:35.315 [2024-12-09 05:25:17.659807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.659847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.660061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.660101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.660253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.660296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 
00:30:35.316 [2024-12-09 05:25:17.660503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.660543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.660692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.660733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.660866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.660906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.661094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.661134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.661376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.661418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 
00:30:35.316 [2024-12-09 05:25:17.661612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.661652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.661871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.661911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.662058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.662098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.662243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.662284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.662433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.662474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 
00:30:35.316 [2024-12-09 05:25:17.662626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.662666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.662798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.662838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.662980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.663021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.663149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.663190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.663340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.663379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 
00:30:35.316 [2024-12-09 05:25:17.663640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.663682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.663940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.663981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.664193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.664251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.664389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.664429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.664640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.664679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 
00:30:35.316 [2024-12-09 05:25:17.664826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.664867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.665016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.665055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.665216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.665258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.665404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.665447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.665589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.665629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 
00:30:35.316 [2024-12-09 05:25:17.665760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.665801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.666015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.666055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.666181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.666231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.666370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.666409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.666602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.666642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 
00:30:35.316 [2024-12-09 05:25:17.666783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.666824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.667029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.667069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.667223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.667264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.667387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.667427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 00:30:35.316 [2024-12-09 05:25:17.667687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.316 [2024-12-09 05:25:17.667727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.316 qpair failed and we were unable to recover it. 
00:30:35.316 [2024-12-09 05:25:17.667868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.667909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.668169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.668220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.668360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.668400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.668603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.668644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.668839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.668879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 
00:30:35.317 [2024-12-09 05:25:17.669095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.669139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.669365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.669408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.669534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.669573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.669722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.669762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.669895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.669936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 
00:30:35.317 [2024-12-09 05:25:17.670080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.670119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.670313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.670355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.670495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.670535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.670747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.670787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.670984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.671025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 
00:30:35.317 [2024-12-09 05:25:17.671159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.671199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.671415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.671455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.671680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.671721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.671876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.671917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.672060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.672101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 
00:30:35.317 [2024-12-09 05:25:17.672246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.672288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.672420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.672460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.672600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.672646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.672796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.672836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.673040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.673080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 
00:30:35.317 [2024-12-09 05:25:17.673217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.673259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.673414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.673454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.673594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.673633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.673841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.673882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.674070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.674112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff96c000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 
00:30:35.317 [2024-12-09 05:25:17.674274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.674322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.674523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.674563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.674715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.674756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.674898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.674939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.675073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.675114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 
00:30:35.317 [2024-12-09 05:25:17.675256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.675298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.675442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.675484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.317 [2024-12-09 05:25:17.675618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.317 [2024-12-09 05:25:17.675658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.317 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.675799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.675840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.676040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.676081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 
00:30:35.318 [2024-12-09 05:25:17.676232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.676272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.676466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.676507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.676643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.676684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.676825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.676865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.677060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.677100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 
00:30:35.318 [2024-12-09 05:25:17.677241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.677283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.677496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.677536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:35.318 [2024-12-09 05:25:17.677747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.677788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.677917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.677965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 
00:30:35.318 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:35.318 [2024-12-09 05:25:17.678123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.678173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.318 [2024-12-09 05:25:17.678411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.678453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.678601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.678644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.318 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.678850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.678891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 
00:30:35.318 [2024-12-09 05:25:17.679099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.679140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.679277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.679317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.679550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.679591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.679736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.679776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.679916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.679956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 
00:30:35.318 [2024-12-09 05:25:17.680146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.680185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.680344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.680387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.680519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.680565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.680688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.680729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.681010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.681049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 
00:30:35.318 [2024-12-09 05:25:17.681190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.681243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.681392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.681432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.681658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.681699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.681842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.681882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.682076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.682115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 
00:30:35.318 [2024-12-09 05:25:17.682251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.682292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.682429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.682470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.682677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.682717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.682981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.683020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.318 [2024-12-09 05:25:17.683156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.683196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 
00:30:35.318 [2024-12-09 05:25:17.683337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.318 [2024-12-09 05:25:17.683378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.318 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.683529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.683570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.683772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.683811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.683961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.684001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.684137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.684177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 
00:30:35.319 [2024-12-09 05:25:17.684324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.684364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.684564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.684604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.684810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.684849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.685050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.685090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.685225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.685266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 
00:30:35.319 [2024-12-09 05:25:17.685396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.685436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.685668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.685708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.685842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.685882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.686104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.686144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.686305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.686346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 
00:30:35.319 [2024-12-09 05:25:17.686473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.686513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.686646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.686686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.686890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.686930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.687149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.687189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.687344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.687384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 
00:30:35.319 [2024-12-09 05:25:17.687512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.687552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.687758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.687798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.688019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.688059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.688223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.688264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.688470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.688510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 
00:30:35.319 [2024-12-09 05:25:17.688737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.688777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.688916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.688956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.689229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.689277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.689492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.689531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.689731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.689771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 
00:30:35.319 [2024-12-09 05:25:17.689963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.690003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.690143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.690183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.690457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.690497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.319 [2024-12-09 05:25:17.690631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.319 [2024-12-09 05:25:17.690670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.319 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.690794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.690834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 
00:30:35.320 [2024-12-09 05:25:17.691050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.691089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.691230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.691271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.691481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.691522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.691719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.691759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.692015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.692055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 
00:30:35.320 [2024-12-09 05:25:17.692318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.692360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.692562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.692602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.692861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.692901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.693029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.693070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.693274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.693315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 
00:30:35.320 [2024-12-09 05:25:17.693507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.693546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.693692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.693732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.693943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.693983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.694108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.694147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.694382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.694422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 
00:30:35.320 [2024-12-09 05:25:17.694562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.694603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.694732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.694770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.694984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.695024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.695240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.695282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.695418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.695457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 
00:30:35.320 [2024-12-09 05:25:17.695579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.695618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.695771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.695810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.695942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.695983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.696174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.696226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.696419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.696458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 
00:30:35.320 [2024-12-09 05:25:17.696659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.696700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.696830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.696869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.697007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.697046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.697245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.697286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.697494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.697534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 
00:30:35.320 [2024-12-09 05:25:17.697726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.697766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.697976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.698016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.698154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.698199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.698342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.698382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.698580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.698620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 
00:30:35.320 [2024-12-09 05:25:17.698759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.698799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.320 qpair failed and we were unable to recover it. 00:30:35.320 [2024-12-09 05:25:17.699080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.320 [2024-12-09 05:25:17.699120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.699266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.699308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.699505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.699544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.699742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.699781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 
00:30:35.321 [2024-12-09 05:25:17.699925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.699963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.700119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.700159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.700298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.700339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.700532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.700572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.700762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.700800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 
00:30:35.321 [2024-12-09 05:25:17.701009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.701049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.701185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.701235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.701428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.701468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.701780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.701820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.701954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.701993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 
00:30:35.321 [2024-12-09 05:25:17.702130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.702168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.702303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.702343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.702549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.702589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.702808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.702848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.702979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.703019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 
00:30:35.321 [2024-12-09 05:25:17.703142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.703181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.703497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.703538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.703671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.703710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.703973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.704013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.704174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.704229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 
00:30:35.321 [2024-12-09 05:25:17.704451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.704492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.704687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.704728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.704865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.704906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.705043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.705084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.705215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.705257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 
00:30:35.321 [2024-12-09 05:25:17.705410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.705450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.705664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.705704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.705900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.705940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.706174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.706227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.706503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.706542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 
00:30:35.321 [2024-12-09 05:25:17.706674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.706714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.706851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.706891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.707117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.707164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.707442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.707501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420 00:30:35.321 qpair failed and we were unable to recover it. 00:30:35.321 [2024-12-09 05:25:17.707715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.321 [2024-12-09 05:25:17.707760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.322 qpair failed and we were unable to recover it. 
00:30:35.322 [2024-12-09 05:25:17.707959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.322 [2024-12-09 05:25:17.708000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.322 qpair failed and we were unable to recover it. 00:30:35.322 [2024-12-09 05:25:17.708205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.322 [2024-12-09 05:25:17.708258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.322 qpair failed and we were unable to recover it. 00:30:35.322 [2024-12-09 05:25:17.708545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.322 [2024-12-09 05:25:17.708586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.322 qpair failed and we were unable to recover it. 00:30:35.322 [2024-12-09 05:25:17.708801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.322 [2024-12-09 05:25:17.708842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.322 qpair failed and we were unable to recover it. 00:30:35.322 [2024-12-09 05:25:17.709149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.322 [2024-12-09 05:25:17.709190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.322 qpair failed and we were unable to recover it. 
00:30:35.322 [2024-12-09 05:25:17.709348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.322 [2024-12-09 05:25:17.709389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.322 qpair failed and we were unable to recover it. 00:30:35.322 [2024-12-09 05:25:17.709606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.322 [2024-12-09 05:25:17.709647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.322 qpair failed and we were unable to recover it. 00:30:35.322 [2024-12-09 05:25:17.709773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.322 [2024-12-09 05:25:17.709812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.322 qpair failed and we were unable to recover it. 00:30:35.322 [2024-12-09 05:25:17.709942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.322 [2024-12-09 05:25:17.709983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.322 qpair failed and we were unable to recover it. 00:30:35.322 [2024-12-09 05:25:17.710264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.322 [2024-12-09 05:25:17.710308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.322 qpair failed and we were unable to recover it. 
00:30:35.322 [2024-12-09 05:25:17.710509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.322 [2024-12-09 05:25:17.710549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.322 qpair failed and we were unable to recover it. 00:30:35.322 [2024-12-09 05:25:17.710767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.322 [2024-12-09 05:25:17.710809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.322 qpair failed and we were unable to recover it. 00:30:35.322 [2024-12-09 05:25:17.710948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.322 [2024-12-09 05:25:17.710990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.322 qpair failed and we were unable to recover it. 00:30:35.322 [2024-12-09 05:25:17.711223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.322 [2024-12-09 05:25:17.711270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.322 qpair failed and we were unable to recover it. 00:30:35.322 [2024-12-09 05:25:17.711415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.322 [2024-12-09 05:25:17.711455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.322 qpair failed and we were unable to recover it. 
00:30:35.322 [2024-12-09 05:25:17.711679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.322 [2024-12-09 05:25:17.711721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.322 qpair failed and we were unable to recover it. 00:30:35.322 [2024-12-09 05:25:17.712014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.322 [2024-12-09 05:25:17.712055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.322 qpair failed and we were unable to recover it. 00:30:35.322 [2024-12-09 05:25:17.712313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.322 [2024-12-09 05:25:17.712356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.322 qpair failed and we were unable to recover it. 00:30:35.322 [2024-12-09 05:25:17.712506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.322 [2024-12-09 05:25:17.712546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.322 qpair failed and we were unable to recover it. 00:30:35.322 [2024-12-09 05:25:17.712748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.322 [2024-12-09 05:25:17.712788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420 00:30:35.322 qpair failed and we were unable to recover it. 
00:30:35.322 [2024-12-09 05:25:17.713023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.322 [2024-12-09 05:25:17.713063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff964000b90 with addr=10.0.0.2, port=4420
00:30:35.322 qpair failed and we were unable to recover it.
00:30:35.322 [... 05:25:17.713221-05:25:17.714059: the same connect()/qpair-failure triple repeats for tqpair=0x7ff964000b90 and tqpair=0x7ff96c000b90 ...]
00:30:35.322 [2024-12-09 05:25:17.714225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.322 [2024-12-09 05:25:17.714269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.322 qpair failed and we were unable to recover it.
00:30:35.322 [... 05:25:17.714473-05:25:17.716618: the triple repeats for tqpair=0x7ff960000b90 and tqpair=0x7ff964000b90 ...]
00:30:35.322 Malloc0
00:30:35.322 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:35.322 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:35.323 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:35.323 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:35.323 [2024-12-09 05:25:17.717413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.323 [2024-12-09 05:25:17.717462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91000 with addr=10.0.0.2, port=4420
00:30:35.323 qpair failed and we were unable to recover it.
00:30:35.323 [... 05:25:17.717627-05:25:17.722899: the triple repeats for tqpair=0x7ff960000b90 and tqpair=0x7ff96c000b90 ...]
00:30:35.323 [2024-12-09 05:25:17.722986] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:35.323 [... 05:25:17.723103-05:25:17.731761: the triple repeats for tqpair=0x7ff96c000b90 ...]
00:30:35.324 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:35.324 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:35.324 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:35.324 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:35.324 [... 05:25:17.733057-05:25:17.740035: the triple repeats for tqpair=0x7ff96c000b90 and tqpair=0x7ff960000b90 ...]
00:30:35.325 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:35.325 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:35.325 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:35.325 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:35.325 [... 05:25:17.740184-05:25:17.740846: the triple repeats for tqpair=0x7ff960000b90 ...]
00:30:35.325 [2024-12-09 05:25:17.740981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.325 [2024-12-09 05:25:17.741021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.325 qpair failed and we were unable to recover it. 00:30:35.325 [2024-12-09 05:25:17.741235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.325 [2024-12-09 05:25:17.741282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.325 qpair failed and we were unable to recover it. 00:30:35.325 [2024-12-09 05:25:17.741488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.325 [2024-12-09 05:25:17.741528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.325 qpair failed and we were unable to recover it. 00:30:35.325 [2024-12-09 05:25:17.741754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.325 [2024-12-09 05:25:17.741795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.325 qpair failed and we were unable to recover it. 00:30:35.325 [2024-12-09 05:25:17.742064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.325 [2024-12-09 05:25:17.742104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420 00:30:35.325 qpair failed and we were unable to recover it. 
00:30:35.590 [2024-12-09 05:25:17.742256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.742301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.742460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.742499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.742728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.742773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.742920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.742958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.743095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.743133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.743284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.743326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.743471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.743512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.743707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.743747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.743957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.743997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.744197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.744249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.744449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.744489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.744629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.744669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.744929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.744969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.745164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.745203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.745375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.745416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.745644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.745685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.745934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.745976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.746121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.746162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.746383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.746425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.746685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.746725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.746942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.746982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.747256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.747298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.747517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.747557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.747770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:35.590 [2024-12-09 05:25:17.747810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.748007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.748047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:35.590 [2024-12-09 05:25:17.748319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.748360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.748492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:35.590 [2024-12-09 05:25:17.748532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.748738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.748784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.749045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.749084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.749288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.749330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.749485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.749525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.749683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.749723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.749922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.749963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.590 qpair failed and we were unable to recover it.
00:30:35.590 [2024-12-09 05:25:17.750157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.590 [2024-12-09 05:25:17.750197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.591 qpair failed and we were unable to recover it.
00:30:35.591 [2024-12-09 05:25:17.750361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.591 [2024-12-09 05:25:17.750402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.591 qpair failed and we were unable to recover it.
00:30:35.591 [2024-12-09 05:25:17.750601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.591 [2024-12-09 05:25:17.750642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.591 qpair failed and we were unable to recover it.
00:30:35.591 [2024-12-09 05:25:17.750865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.591 [2024-12-09 05:25:17.750904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.591 qpair failed and we were unable to recover it.
00:30:35.591 [2024-12-09 05:25:17.751039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.591 [2024-12-09 05:25:17.751079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff960000b90 with addr=10.0.0.2, port=4420
00:30:35.591 qpair failed and we were unable to recover it.
00:30:35.591 [2024-12-09 05:25:17.751263] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:35.591 [2024-12-09 05:25:17.753915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.591 [2024-12-09 05:25:17.754067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.591 [2024-12-09 05:25:17.754121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.591 [2024-12-09 05:25:17.754155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.591 [2024-12-09 05:25:17.754192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:35.591 [2024-12-09 05:25:17.754298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:35.591 qpair failed and we were unable to recover it.
00:30:35.591 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:35.591 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:35.591 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:35.591 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:35.591 [2024-12-09 05:25:17.763625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.591 [2024-12-09 05:25:17.763715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.591 [2024-12-09 05:25:17.763750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.591 [2024-12-09 05:25:17.763771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.591 [2024-12-09 05:25:17.763791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:35.591 [2024-12-09 05:25:17.763830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:35.591 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:35.591 qpair failed and we were unable to recover it.
00:30:35.591 05:25:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 662481
00:30:35.591 [2024-12-09 05:25:17.773614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.591 [2024-12-09 05:25:17.773698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.591 [2024-12-09 05:25:17.773723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.591 [2024-12-09 05:25:17.773737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.591 [2024-12-09 05:25:17.773750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:35.591 [2024-12-09 05:25:17.773777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:35.591 qpair failed and we were unable to recover it.
00:30:35.591 [2024-12-09 05:25:17.783559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.591 [2024-12-09 05:25:17.783620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.591 [2024-12-09 05:25:17.783638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.591 [2024-12-09 05:25:17.783648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.591 [2024-12-09 05:25:17.783657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:35.591 [2024-12-09 05:25:17.783676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:35.591 qpair failed and we were unable to recover it.
00:30:35.591 [2024-12-09 05:25:17.793554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.591 [2024-12-09 05:25:17.793635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.591 [2024-12-09 05:25:17.793651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.591 [2024-12-09 05:25:17.793661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.591 [2024-12-09 05:25:17.793670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:35.591 [2024-12-09 05:25:17.793687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:35.591 qpair failed and we were unable to recover it.
00:30:35.591 [2024-12-09 05:25:17.803578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.591 [2024-12-09 05:25:17.803657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.591 [2024-12-09 05:25:17.803673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.591 [2024-12-09 05:25:17.803682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.591 [2024-12-09 05:25:17.803691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:35.591 [2024-12-09 05:25:17.803708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:35.591 qpair failed and we were unable to recover it.
00:30:35.591 [2024-12-09 05:25:17.813539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.591 [2024-12-09 05:25:17.813593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.591 [2024-12-09 05:25:17.813608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.591 [2024-12-09 05:25:17.813618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.591 [2024-12-09 05:25:17.813626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:35.591 [2024-12-09 05:25:17.813644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:35.591 qpair failed and we were unable to recover it.
00:30:35.591 [2024-12-09 05:25:17.823605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.591 [2024-12-09 05:25:17.823670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.591 [2024-12-09 05:25:17.823686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.591 [2024-12-09 05:25:17.823695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.591 [2024-12-09 05:25:17.823704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:35.591 [2024-12-09 05:25:17.823721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:35.591 qpair failed and we were unable to recover it.
00:30:35.591 [2024-12-09 05:25:17.833630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.591 [2024-12-09 05:25:17.833686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.591 [2024-12-09 05:25:17.833702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.591 [2024-12-09 05:25:17.833714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.591 [2024-12-09 05:25:17.833723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:35.591 [2024-12-09 05:25:17.833741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:35.591 qpair failed and we were unable to recover it.
00:30:35.591 [2024-12-09 05:25:17.843646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.591 [2024-12-09 05:25:17.843738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.591 [2024-12-09 05:25:17.843754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.591 [2024-12-09 05:25:17.843763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.591 [2024-12-09 05:25:17.843772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:35.591 [2024-12-09 05:25:17.843789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:35.591 qpair failed and we were unable to recover it.
00:30:35.591 [2024-12-09 05:25:17.853707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.591 [2024-12-09 05:25:17.853764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.591 [2024-12-09 05:25:17.853780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.591 [2024-12-09 05:25:17.853790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.592 [2024-12-09 05:25:17.853798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:35.592 [2024-12-09 05:25:17.853815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:35.592 qpair failed and we were unable to recover it.
00:30:35.592 [2024-12-09 05:25:17.863738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.592 [2024-12-09 05:25:17.863793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.592 [2024-12-09 05:25:17.863809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.592 [2024-12-09 05:25:17.863818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.592 [2024-12-09 05:25:17.863827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:35.592 [2024-12-09 05:25:17.863844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:35.592 qpair failed and we were unable to recover it.
00:30:35.592 [2024-12-09 05:25:17.873780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.592 [2024-12-09 05:25:17.873841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.592 [2024-12-09 05:25:17.873857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.592 [2024-12-09 05:25:17.873867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.592 [2024-12-09 05:25:17.873875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:35.592 [2024-12-09 05:25:17.873899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:35.592 qpair failed and we were unable to recover it.
00:30:35.592 [2024-12-09 05:25:17.883732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:35.592 [2024-12-09 05:25:17.883796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:35.592 [2024-12-09 05:25:17.883812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:35.592 [2024-12-09 05:25:17.883821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:35.592 [2024-12-09 05:25:17.883830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:35.592 [2024-12-09 05:25:17.883848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:35.592 qpair failed and we were unable to recover it.
00:30:35.592 [2024-12-09 05:25:17.893749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.592 [2024-12-09 05:25:17.893851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.592 [2024-12-09 05:25:17.893868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.592 [2024-12-09 05:25:17.893877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.592 [2024-12-09 05:25:17.893885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.592 [2024-12-09 05:25:17.893903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.592 qpair failed and we were unable to recover it. 
00:30:35.592 [2024-12-09 05:25:17.903819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.592 [2024-12-09 05:25:17.903879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.592 [2024-12-09 05:25:17.903895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.592 [2024-12-09 05:25:17.903905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.592 [2024-12-09 05:25:17.903913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.592 [2024-12-09 05:25:17.903931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.592 qpair failed and we were unable to recover it. 
00:30:35.592 [2024-12-09 05:25:17.913803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.592 [2024-12-09 05:25:17.913854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.592 [2024-12-09 05:25:17.913871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.592 [2024-12-09 05:25:17.913880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.592 [2024-12-09 05:25:17.913889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.592 [2024-12-09 05:25:17.913907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.592 qpair failed and we were unable to recover it. 
00:30:35.592 [2024-12-09 05:25:17.923908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.592 [2024-12-09 05:25:17.923961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.592 [2024-12-09 05:25:17.923978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.592 [2024-12-09 05:25:17.923988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.592 [2024-12-09 05:25:17.923996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.592 [2024-12-09 05:25:17.924014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.592 qpair failed and we were unable to recover it. 
00:30:35.592 [2024-12-09 05:25:17.934063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.592 [2024-12-09 05:25:17.934129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.592 [2024-12-09 05:25:17.934145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.592 [2024-12-09 05:25:17.934154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.592 [2024-12-09 05:25:17.934163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.592 [2024-12-09 05:25:17.934179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.592 qpair failed and we were unable to recover it. 
00:30:35.592 [2024-12-09 05:25:17.944004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.592 [2024-12-09 05:25:17.944062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.592 [2024-12-09 05:25:17.944078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.592 [2024-12-09 05:25:17.944088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.592 [2024-12-09 05:25:17.944097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.592 [2024-12-09 05:25:17.944114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.592 qpair failed and we were unable to recover it. 
00:30:35.592 [2024-12-09 05:25:17.954055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.592 [2024-12-09 05:25:17.954112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.592 [2024-12-09 05:25:17.954128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.592 [2024-12-09 05:25:17.954138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.592 [2024-12-09 05:25:17.954146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.592 [2024-12-09 05:25:17.954164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.592 qpair failed and we were unable to recover it. 
00:30:35.592 [2024-12-09 05:25:17.964084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.592 [2024-12-09 05:25:17.964146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.592 [2024-12-09 05:25:17.964166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.592 [2024-12-09 05:25:17.964175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.592 [2024-12-09 05:25:17.964183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.592 [2024-12-09 05:25:17.964201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.592 qpair failed and we were unable to recover it. 
00:30:35.592 [2024-12-09 05:25:17.974000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.592 [2024-12-09 05:25:17.974059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.592 [2024-12-09 05:25:17.974076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.592 [2024-12-09 05:25:17.974085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.592 [2024-12-09 05:25:17.974094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.592 [2024-12-09 05:25:17.974111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.592 qpair failed and we were unable to recover it. 
00:30:35.592 [2024-12-09 05:25:17.984039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.592 [2024-12-09 05:25:17.984103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.592 [2024-12-09 05:25:17.984120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.592 [2024-12-09 05:25:17.984129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.593 [2024-12-09 05:25:17.984138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.593 [2024-12-09 05:25:17.984156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.593 qpair failed and we were unable to recover it. 
00:30:35.593 [2024-12-09 05:25:17.994053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.593 [2024-12-09 05:25:17.994143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.593 [2024-12-09 05:25:17.994159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.593 [2024-12-09 05:25:17.994169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.593 [2024-12-09 05:25:17.994177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.593 [2024-12-09 05:25:17.994195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.593 qpair failed and we were unable to recover it. 
00:30:35.593 [2024-12-09 05:25:18.004073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.593 [2024-12-09 05:25:18.004150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.593 [2024-12-09 05:25:18.004166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.593 [2024-12-09 05:25:18.004176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.593 [2024-12-09 05:25:18.004187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.593 [2024-12-09 05:25:18.004205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.593 qpair failed and we were unable to recover it. 
00:30:35.593 [2024-12-09 05:25:18.014093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.593 [2024-12-09 05:25:18.014148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.593 [2024-12-09 05:25:18.014164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.593 [2024-12-09 05:25:18.014173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.593 [2024-12-09 05:25:18.014182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.593 [2024-12-09 05:25:18.014199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.593 qpair failed and we were unable to recover it. 
00:30:35.593 [2024-12-09 05:25:18.024225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.593 [2024-12-09 05:25:18.024282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.593 [2024-12-09 05:25:18.024298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.593 [2024-12-09 05:25:18.024307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.593 [2024-12-09 05:25:18.024315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.593 [2024-12-09 05:25:18.024333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.593 qpair failed and we were unable to recover it. 
00:30:35.593 [2024-12-09 05:25:18.034213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.593 [2024-12-09 05:25:18.034269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.593 [2024-12-09 05:25:18.034284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.593 [2024-12-09 05:25:18.034294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.593 [2024-12-09 05:25:18.034302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.593 [2024-12-09 05:25:18.034320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.593 qpair failed and we were unable to recover it. 
00:30:35.593 [2024-12-09 05:25:18.044258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.593 [2024-12-09 05:25:18.044310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.593 [2024-12-09 05:25:18.044326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.593 [2024-12-09 05:25:18.044335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.593 [2024-12-09 05:25:18.044343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.593 [2024-12-09 05:25:18.044361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.593 qpair failed and we were unable to recover it. 
00:30:35.965 [2024-12-09 05:25:18.054203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.965 [2024-12-09 05:25:18.054279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.965 [2024-12-09 05:25:18.054299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.965 [2024-12-09 05:25:18.054310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.965 [2024-12-09 05:25:18.054321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.965 [2024-12-09 05:25:18.054344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.965 qpair failed and we were unable to recover it. 
00:30:35.965 [2024-12-09 05:25:18.064287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.965 [2024-12-09 05:25:18.064347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.965 [2024-12-09 05:25:18.064366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.965 [2024-12-09 05:25:18.064375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.965 [2024-12-09 05:25:18.064384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.965 [2024-12-09 05:25:18.064403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.965 qpair failed and we were unable to recover it. 
00:30:35.965 [2024-12-09 05:25:18.074342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.965 [2024-12-09 05:25:18.074405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.965 [2024-12-09 05:25:18.074422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.965 [2024-12-09 05:25:18.074431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.965 [2024-12-09 05:25:18.074440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.965 [2024-12-09 05:25:18.074458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.965 qpair failed and we were unable to recover it. 
00:30:35.965 [2024-12-09 05:25:18.084383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.965 [2024-12-09 05:25:18.084439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.965 [2024-12-09 05:25:18.084456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.965 [2024-12-09 05:25:18.084465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.965 [2024-12-09 05:25:18.084473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.965 [2024-12-09 05:25:18.084491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.965 qpair failed and we were unable to recover it. 
00:30:35.965 [2024-12-09 05:25:18.094394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.965 [2024-12-09 05:25:18.094463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.965 [2024-12-09 05:25:18.094482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.965 [2024-12-09 05:25:18.094492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.965 [2024-12-09 05:25:18.094500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.965 [2024-12-09 05:25:18.094518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.965 qpair failed and we were unable to recover it. 
00:30:35.965 [2024-12-09 05:25:18.104424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.965 [2024-12-09 05:25:18.104502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.965 [2024-12-09 05:25:18.104519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.965 [2024-12-09 05:25:18.104529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.965 [2024-12-09 05:25:18.104537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.965 [2024-12-09 05:25:18.104554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.965 qpair failed and we were unable to recover it. 
00:30:35.965 [2024-12-09 05:25:18.114444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.965 [2024-12-09 05:25:18.114497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.965 [2024-12-09 05:25:18.114513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.965 [2024-12-09 05:25:18.114523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.965 [2024-12-09 05:25:18.114531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.965 [2024-12-09 05:25:18.114549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.965 qpair failed and we were unable to recover it. 
00:30:35.965 [2024-12-09 05:25:18.124414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.965 [2024-12-09 05:25:18.124485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.965 [2024-12-09 05:25:18.124501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.965 [2024-12-09 05:25:18.124511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.965 [2024-12-09 05:25:18.124519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.965 [2024-12-09 05:25:18.124537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.965 qpair failed and we were unable to recover it. 
00:30:35.965 [2024-12-09 05:25:18.134433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.965 [2024-12-09 05:25:18.134500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.965 [2024-12-09 05:25:18.134517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.965 [2024-12-09 05:25:18.134526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.965 [2024-12-09 05:25:18.134537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.965 [2024-12-09 05:25:18.134554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.965 qpair failed and we were unable to recover it. 
00:30:35.965 [2024-12-09 05:25:18.144546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.965 [2024-12-09 05:25:18.144630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.965 [2024-12-09 05:25:18.144646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.965 [2024-12-09 05:25:18.144656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.965 [2024-12-09 05:25:18.144664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.965 [2024-12-09 05:25:18.144681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.965 qpair failed and we were unable to recover it. 
00:30:35.965 [2024-12-09 05:25:18.154615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.965 [2024-12-09 05:25:18.154672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.965 [2024-12-09 05:25:18.154688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.965 [2024-12-09 05:25:18.154697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.965 [2024-12-09 05:25:18.154706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.965 [2024-12-09 05:25:18.154723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.965 qpair failed and we were unable to recover it. 
00:30:35.965 [2024-12-09 05:25:18.164593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:35.965 [2024-12-09 05:25:18.164648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:35.965 [2024-12-09 05:25:18.164664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:35.965 [2024-12-09 05:25:18.164673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:35.965 [2024-12-09 05:25:18.164681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:35.965 [2024-12-09 05:25:18.164699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.965 qpair failed and we were unable to recover it. 
00:30:35.965 ... 00:30:36.282 [2024-12-09 05:25:18.174620 ... 05:25:18.505720] (the CONNECT failure sequence above repeats 34 more times at ~10 ms intervals, each attempt logging the same records: Unknown controller ID 0x1 -> Connect command failed, rc -5 -> Connect command completed with error: sct 1, sc 130 -> Failed to poll NVMe-oF Fabric CONNECT command -> Failed to connect tqpair=0x7ff960000b90 -> CQ transport error -6 (No such device or address) on qpair id 4 -> qpair failed and we were unable to recover it.)
00:30:36.283 [2024-12-09 05:25:18.515615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.283 [2024-12-09 05:25:18.515695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.283 [2024-12-09 05:25:18.515712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.283 [2024-12-09 05:25:18.515721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.283 [2024-12-09 05:25:18.515730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.283 [2024-12-09 05:25:18.515748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.283 qpair failed and we were unable to recover it. 
00:30:36.283 [2024-12-09 05:25:18.525656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.283 [2024-12-09 05:25:18.525711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.283 [2024-12-09 05:25:18.525730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.283 [2024-12-09 05:25:18.525739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.283 [2024-12-09 05:25:18.525747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.283 [2024-12-09 05:25:18.525765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.283 qpair failed and we were unable to recover it. 
00:30:36.283 [2024-12-09 05:25:18.535573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.283 [2024-12-09 05:25:18.535629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.283 [2024-12-09 05:25:18.535645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.283 [2024-12-09 05:25:18.535654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.283 [2024-12-09 05:25:18.535663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.283 [2024-12-09 05:25:18.535680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.283 qpair failed and we were unable to recover it. 
00:30:36.283 [2024-12-09 05:25:18.545728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.283 [2024-12-09 05:25:18.545834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.283 [2024-12-09 05:25:18.545850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.283 [2024-12-09 05:25:18.545859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.283 [2024-12-09 05:25:18.545868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.283 [2024-12-09 05:25:18.545886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.283 qpair failed and we were unable to recover it. 
00:30:36.283 [2024-12-09 05:25:18.555693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.283 [2024-12-09 05:25:18.555750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.283 [2024-12-09 05:25:18.555766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.283 [2024-12-09 05:25:18.555775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.283 [2024-12-09 05:25:18.555784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.283 [2024-12-09 05:25:18.555802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.283 qpair failed and we were unable to recover it. 
00:30:36.283 [2024-12-09 05:25:18.565715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.283 [2024-12-09 05:25:18.565774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.283 [2024-12-09 05:25:18.565790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.283 [2024-12-09 05:25:18.565801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.283 [2024-12-09 05:25:18.565813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.283 [2024-12-09 05:25:18.565830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.283 qpair failed and we were unable to recover it. 
00:30:36.283 [2024-12-09 05:25:18.575740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.283 [2024-12-09 05:25:18.575792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.283 [2024-12-09 05:25:18.575808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.283 [2024-12-09 05:25:18.575817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.283 [2024-12-09 05:25:18.575826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.283 [2024-12-09 05:25:18.575843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.283 qpair failed and we were unable to recover it. 
00:30:36.283 [2024-12-09 05:25:18.585809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.284 [2024-12-09 05:25:18.585915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.284 [2024-12-09 05:25:18.585931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.284 [2024-12-09 05:25:18.585940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.284 [2024-12-09 05:25:18.585949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.284 [2024-12-09 05:25:18.585966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.284 qpair failed and we were unable to recover it. 
00:30:36.284 [2024-12-09 05:25:18.595805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.284 [2024-12-09 05:25:18.595863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.284 [2024-12-09 05:25:18.595880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.284 [2024-12-09 05:25:18.595890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.284 [2024-12-09 05:25:18.595899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.284 [2024-12-09 05:25:18.595916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.284 qpair failed and we were unable to recover it. 
00:30:36.284 [2024-12-09 05:25:18.605873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.284 [2024-12-09 05:25:18.605951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.284 [2024-12-09 05:25:18.605967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.284 [2024-12-09 05:25:18.605976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.284 [2024-12-09 05:25:18.605984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.284 [2024-12-09 05:25:18.606002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.284 qpair failed and we were unable to recover it. 
00:30:36.284 [2024-12-09 05:25:18.615864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.284 [2024-12-09 05:25:18.615918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.284 [2024-12-09 05:25:18.674530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.284 [2024-12-09 05:25:18.674595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.284 [2024-12-09 05:25:18.674632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.284 [2024-12-09 05:25:18.674711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.284 qpair failed and we were unable to recover it. 
00:30:36.284 [2024-12-09 05:25:18.676045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.284 [2024-12-09 05:25:18.676154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.284 [2024-12-09 05:25:18.676206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.284 [2024-12-09 05:25:18.676277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.284 [2024-12-09 05:25:18.676310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.284 [2024-12-09 05:25:18.676369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.284 qpair failed and we were unable to recover it. 
00:30:36.284 [2024-12-09 05:25:18.686091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.284 [2024-12-09 05:25:18.686189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.284 [2024-12-09 05:25:18.686243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.284 [2024-12-09 05:25:18.686269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.284 [2024-12-09 05:25:18.686293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.284 [2024-12-09 05:25:18.686340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.284 qpair failed and we were unable to recover it. 
00:30:36.284 [2024-12-09 05:25:18.696077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.284 [2024-12-09 05:25:18.696157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.284 [2024-12-09 05:25:18.696184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.284 [2024-12-09 05:25:18.696201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.284 [2024-12-09 05:25:18.696234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.284 [2024-12-09 05:25:18.696271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.284 qpair failed and we were unable to recover it. 
00:30:36.284 [2024-12-09 05:25:18.706102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.284 [2024-12-09 05:25:18.706163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.284 [2024-12-09 05:25:18.706186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.284 [2024-12-09 05:25:18.706197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.284 [2024-12-09 05:25:18.706213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.284 [2024-12-09 05:25:18.706236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.284 qpair failed and we were unable to recover it. 
00:30:36.284 [2024-12-09 05:25:18.716139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.284 [2024-12-09 05:25:18.716197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.284 [2024-12-09 05:25:18.716218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.284 [2024-12-09 05:25:18.716227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.284 [2024-12-09 05:25:18.716235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.284 [2024-12-09 05:25:18.716253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.284 qpair failed and we were unable to recover it. 
00:30:36.284 [2024-12-09 05:25:18.726115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.284 [2024-12-09 05:25:18.726173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.284 [2024-12-09 05:25:18.726189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.284 [2024-12-09 05:25:18.726199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.284 [2024-12-09 05:25:18.726212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.284 [2024-12-09 05:25:18.726236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.284 qpair failed and we were unable to recover it. 
00:30:36.284 [2024-12-09 05:25:18.736229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.284 [2024-12-09 05:25:18.736287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.284 [2024-12-09 05:25:18.736303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.284 [2024-12-09 05:25:18.736312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.284 [2024-12-09 05:25:18.736321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.284 [2024-12-09 05:25:18.736339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.284 qpair failed and we were unable to recover it. 
00:30:36.284 [2024-12-09 05:25:18.746228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.284 [2024-12-09 05:25:18.746298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.284 [2024-12-09 05:25:18.746314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.284 [2024-12-09 05:25:18.746323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.284 [2024-12-09 05:25:18.746334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.284 [2024-12-09 05:25:18.746352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.284 qpair failed and we were unable to recover it. 
00:30:36.571 [2024-12-09 05:25:18.756259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.571 [2024-12-09 05:25:18.756321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.571 [2024-12-09 05:25:18.756338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.571 [2024-12-09 05:25:18.756347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.571 [2024-12-09 05:25:18.756356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.571 [2024-12-09 05:25:18.756373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.571 qpair failed and we were unable to recover it. 
00:30:36.571 [2024-12-09 05:25:18.766268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.571 [2024-12-09 05:25:18.766371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.571 [2024-12-09 05:25:18.766387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.571 [2024-12-09 05:25:18.766396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.571 [2024-12-09 05:25:18.766405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.571 [2024-12-09 05:25:18.766423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.571 qpair failed and we were unable to recover it. 
00:30:36.571 [2024-12-09 05:25:18.776300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.571 [2024-12-09 05:25:18.776404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.571 [2024-12-09 05:25:18.776419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.571 [2024-12-09 05:25:18.776428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.571 [2024-12-09 05:25:18.776437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.571 [2024-12-09 05:25:18.776455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.571 qpair failed and we were unable to recover it. 
00:30:36.571 [2024-12-09 05:25:18.786327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.571 [2024-12-09 05:25:18.786406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.571 [2024-12-09 05:25:18.786422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.571 [2024-12-09 05:25:18.786431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.571 [2024-12-09 05:25:18.786440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.571 [2024-12-09 05:25:18.786456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.571 qpair failed and we were unable to recover it. 
00:30:36.571 [2024-12-09 05:25:18.796363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.571 [2024-12-09 05:25:18.796422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.572 [2024-12-09 05:25:18.796439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.572 [2024-12-09 05:25:18.796448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.572 [2024-12-09 05:25:18.796457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.572 [2024-12-09 05:25:18.796474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.572 qpair failed and we were unable to recover it. 
00:30:36.572 [2024-12-09 05:25:18.806406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.572 [2024-12-09 05:25:18.806465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.572 [2024-12-09 05:25:18.806481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.572 [2024-12-09 05:25:18.806490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.572 [2024-12-09 05:25:18.806498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.572 [2024-12-09 05:25:18.806516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.572 qpair failed and we were unable to recover it. 
00:30:36.572 [2024-12-09 05:25:18.816411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.572 [2024-12-09 05:25:18.816498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.572 [2024-12-09 05:25:18.816513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.572 [2024-12-09 05:25:18.816523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.572 [2024-12-09 05:25:18.816531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.572 [2024-12-09 05:25:18.816548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.572 qpair failed and we were unable to recover it. 
00:30:36.572 [2024-12-09 05:25:18.826448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.572 [2024-12-09 05:25:18.826502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.572 [2024-12-09 05:25:18.826518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.572 [2024-12-09 05:25:18.826528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.572 [2024-12-09 05:25:18.826536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.572 [2024-12-09 05:25:18.826554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.572 qpair failed and we were unable to recover it. 
00:30:36.572 [2024-12-09 05:25:18.836485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.572 [2024-12-09 05:25:18.836545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.572 [2024-12-09 05:25:18.836561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.572 [2024-12-09 05:25:18.836570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.572 [2024-12-09 05:25:18.836579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.572 [2024-12-09 05:25:18.836596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.572 qpair failed and we were unable to recover it.
00:30:36.572 [2024-12-09 05:25:18.846486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.572 [2024-12-09 05:25:18.846581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.572 [2024-12-09 05:25:18.846597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.572 [2024-12-09 05:25:18.846606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.572 [2024-12-09 05:25:18.846614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.572 [2024-12-09 05:25:18.846631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.572 qpair failed and we were unable to recover it.
00:30:36.572 [2024-12-09 05:25:18.856537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.572 [2024-12-09 05:25:18.856604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.572 [2024-12-09 05:25:18.856620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.572 [2024-12-09 05:25:18.856629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.572 [2024-12-09 05:25:18.856637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.572 [2024-12-09 05:25:18.856654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.572 qpair failed and we were unable to recover it.
00:30:36.572 [2024-12-09 05:25:18.866558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.572 [2024-12-09 05:25:18.866628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.572 [2024-12-09 05:25:18.866644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.572 [2024-12-09 05:25:18.866653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.572 [2024-12-09 05:25:18.866661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.572 [2024-12-09 05:25:18.866679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.572 qpair failed and we were unable to recover it.
00:30:36.572 [2024-12-09 05:25:18.876616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.572 [2024-12-09 05:25:18.876675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.572 [2024-12-09 05:25:18.876690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.572 [2024-12-09 05:25:18.876703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.572 [2024-12-09 05:25:18.876711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.572 [2024-12-09 05:25:18.876728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.572 qpair failed and we were unable to recover it.
00:30:36.572 [2024-12-09 05:25:18.886599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.572 [2024-12-09 05:25:18.886659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.572 [2024-12-09 05:25:18.886675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.572 [2024-12-09 05:25:18.886684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.572 [2024-12-09 05:25:18.886693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.572 [2024-12-09 05:25:18.886710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.572 qpair failed and we were unable to recover it.
00:30:36.572 [2024-12-09 05:25:18.896642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.572 [2024-12-09 05:25:18.896698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.572 [2024-12-09 05:25:18.896714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.572 [2024-12-09 05:25:18.896723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.572 [2024-12-09 05:25:18.896732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.572 [2024-12-09 05:25:18.896749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.572 qpair failed and we were unable to recover it.
00:30:36.572 [2024-12-09 05:25:18.906666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.572 [2024-12-09 05:25:18.906724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.572 [2024-12-09 05:25:18.906740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.572 [2024-12-09 05:25:18.906749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.572 [2024-12-09 05:25:18.906757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.572 [2024-12-09 05:25:18.906775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.572 qpair failed and we were unable to recover it.
00:30:36.572 [2024-12-09 05:25:18.916741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.572 [2024-12-09 05:25:18.916847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.572 [2024-12-09 05:25:18.916863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.572 [2024-12-09 05:25:18.916872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.572 [2024-12-09 05:25:18.916881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.572 [2024-12-09 05:25:18.916901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.572 qpair failed and we were unable to recover it.
00:30:36.572 [2024-12-09 05:25:18.926728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.572 [2024-12-09 05:25:18.926784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.572 [2024-12-09 05:25:18.926799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.572 [2024-12-09 05:25:18.926808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.572 [2024-12-09 05:25:18.926816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.572 [2024-12-09 05:25:18.926834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.572 qpair failed and we were unable to recover it.
00:30:36.572 [2024-12-09 05:25:18.936800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.572 [2024-12-09 05:25:18.936868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.572 [2024-12-09 05:25:18.936884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.572 [2024-12-09 05:25:18.936893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.572 [2024-12-09 05:25:18.936902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.572 [2024-12-09 05:25:18.936919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.572 qpair failed and we were unable to recover it.
00:30:36.572 [2024-12-09 05:25:18.946801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.572 [2024-12-09 05:25:18.946876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.572 [2024-12-09 05:25:18.946892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.572 [2024-12-09 05:25:18.946901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.572 [2024-12-09 05:25:18.946909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.572 [2024-12-09 05:25:18.946927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.572 qpair failed and we were unable to recover it.
00:30:36.572 [2024-12-09 05:25:18.956822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.572 [2024-12-09 05:25:18.956884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.572 [2024-12-09 05:25:18.956899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.572 [2024-12-09 05:25:18.956908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.572 [2024-12-09 05:25:18.956917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.572 [2024-12-09 05:25:18.956935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.572 qpair failed and we were unable to recover it.
00:30:36.572 [2024-12-09 05:25:18.966883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.572 [2024-12-09 05:25:18.966944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.572 [2024-12-09 05:25:18.966960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.572 [2024-12-09 05:25:18.966969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.572 [2024-12-09 05:25:18.966977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.572 [2024-12-09 05:25:18.966994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.572 qpair failed and we were unable to recover it.
00:30:36.572 [2024-12-09 05:25:18.976864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.572 [2024-12-09 05:25:18.976939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.572 [2024-12-09 05:25:18.976955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.572 [2024-12-09 05:25:18.976965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.572 [2024-12-09 05:25:18.976973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.572 [2024-12-09 05:25:18.976990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.572 qpair failed and we were unable to recover it.
00:30:36.572 [2024-12-09 05:25:18.986930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.572 [2024-12-09 05:25:18.987005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.572 [2024-12-09 05:25:18.987022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.572 [2024-12-09 05:25:18.987031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.572 [2024-12-09 05:25:18.987040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.572 [2024-12-09 05:25:18.987058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.572 qpair failed and we were unable to recover it.
00:30:36.572 [2024-12-09 05:25:18.996982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.572 [2024-12-09 05:25:18.997088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.572 [2024-12-09 05:25:18.997105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.572 [2024-12-09 05:25:18.997114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.572 [2024-12-09 05:25:18.997122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.572 [2024-12-09 05:25:18.997140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.572 qpair failed and we were unable to recover it.
00:30:36.572 [2024-12-09 05:25:19.006958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.572 [2024-12-09 05:25:19.007013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.572 [2024-12-09 05:25:19.007032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.572 [2024-12-09 05:25:19.007041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.572 [2024-12-09 05:25:19.007049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.572 [2024-12-09 05:25:19.007067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.572 qpair failed and we were unable to recover it.
00:30:36.572 [2024-12-09 05:25:19.017026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.572 [2024-12-09 05:25:19.017083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.572 [2024-12-09 05:25:19.017099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.572 [2024-12-09 05:25:19.017108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.573 [2024-12-09 05:25:19.017116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.573 [2024-12-09 05:25:19.017133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.573 qpair failed and we were unable to recover it.
00:30:36.573 [2024-12-09 05:25:19.027007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.573 [2024-12-09 05:25:19.027062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.573 [2024-12-09 05:25:19.027078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.573 [2024-12-09 05:25:19.027087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.573 [2024-12-09 05:25:19.027095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.573 [2024-12-09 05:25:19.027113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.573 qpair failed and we were unable to recover it.
00:30:36.573 [2024-12-09 05:25:19.037054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.573 [2024-12-09 05:25:19.037112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.573 [2024-12-09 05:25:19.037128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.573 [2024-12-09 05:25:19.037137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.573 [2024-12-09 05:25:19.037146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.573 [2024-12-09 05:25:19.037163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.573 qpair failed and we were unable to recover it.
00:30:36.833 [2024-12-09 05:25:19.047091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.833 [2024-12-09 05:25:19.047180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.833 [2024-12-09 05:25:19.047197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.833 [2024-12-09 05:25:19.047210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.833 [2024-12-09 05:25:19.047219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.833 [2024-12-09 05:25:19.047241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.833 qpair failed and we were unable to recover it.
00:30:36.833 [2024-12-09 05:25:19.057148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.833 [2024-12-09 05:25:19.057215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.833 [2024-12-09 05:25:19.057235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.833 [2024-12-09 05:25:19.057247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.833 [2024-12-09 05:25:19.057255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.833 [2024-12-09 05:25:19.057273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.833 qpair failed and we were unable to recover it.
00:30:36.833 [2024-12-09 05:25:19.067128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.833 [2024-12-09 05:25:19.067188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.833 [2024-12-09 05:25:19.067205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.833 [2024-12-09 05:25:19.067219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.833 [2024-12-09 05:25:19.067228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.833 [2024-12-09 05:25:19.067246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.833 qpair failed and we were unable to recover it.
00:30:36.833 [2024-12-09 05:25:19.077186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.833 [2024-12-09 05:25:19.077252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.833 [2024-12-09 05:25:19.077268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.833 [2024-12-09 05:25:19.077277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.833 [2024-12-09 05:25:19.077286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.833 [2024-12-09 05:25:19.077305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.833 qpair failed and we were unable to recover it.
00:30:36.833 [2024-12-09 05:25:19.087104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.833 [2024-12-09 05:25:19.087211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.833 [2024-12-09 05:25:19.087227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.833 [2024-12-09 05:25:19.087236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.833 [2024-12-09 05:25:19.087245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.833 [2024-12-09 05:25:19.087262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.833 qpair failed and we were unable to recover it.
00:30:36.833 [2024-12-09 05:25:19.097192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.833 [2024-12-09 05:25:19.097268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.833 [2024-12-09 05:25:19.097284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.833 [2024-12-09 05:25:19.097294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.833 [2024-12-09 05:25:19.097302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.833 [2024-12-09 05:25:19.097319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.833 qpair failed and we were unable to recover it.
00:30:36.833 [2024-12-09 05:25:19.107229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.833 [2024-12-09 05:25:19.107290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.833 [2024-12-09 05:25:19.107306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.833 [2024-12-09 05:25:19.107315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.833 [2024-12-09 05:25:19.107324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.833 [2024-12-09 05:25:19.107342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.833 qpair failed and we were unable to recover it.
00:30:36.833 [2024-12-09 05:25:19.117267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.833 [2024-12-09 05:25:19.117323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.833 [2024-12-09 05:25:19.117339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.833 [2024-12-09 05:25:19.117349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.833 [2024-12-09 05:25:19.117357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.833 [2024-12-09 05:25:19.117375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.833 qpair failed and we were unable to recover it.
00:30:36.833 [2024-12-09 05:25:19.127310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.833 [2024-12-09 05:25:19.127366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.833 [2024-12-09 05:25:19.127382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.833 [2024-12-09 05:25:19.127392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.833 [2024-12-09 05:25:19.127400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.833 [2024-12-09 05:25:19.127418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.833 qpair failed and we were unable to recover it.
00:30:36.833 [2024-12-09 05:25:19.137351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.833 [2024-12-09 05:25:19.137409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.833 [2024-12-09 05:25:19.137428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.833 [2024-12-09 05:25:19.137437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.833 [2024-12-09 05:25:19.137446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.833 [2024-12-09 05:25:19.137463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.833 qpair failed and we were unable to recover it.
00:30:36.833 [2024-12-09 05:25:19.147312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.833 [2024-12-09 05:25:19.147370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.833 [2024-12-09 05:25:19.147386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.833 [2024-12-09 05:25:19.147396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.833 [2024-12-09 05:25:19.147404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.833 [2024-12-09 05:25:19.147422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.833 qpair failed and we were unable to recover it.
00:30:36.833 [2024-12-09 05:25:19.157382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.833 [2024-12-09 05:25:19.157439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.833 [2024-12-09 05:25:19.157455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.833 [2024-12-09 05:25:19.157465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.833 [2024-12-09 05:25:19.157473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.833 [2024-12-09 05:25:19.157490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.833 qpair failed and we were unable to recover it.
00:30:36.833 [2024-12-09 05:25:19.167345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.833 [2024-12-09 05:25:19.167403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.833 [2024-12-09 05:25:19.167419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.833 [2024-12-09 05:25:19.167428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.833 [2024-12-09 05:25:19.167436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.833 [2024-12-09 05:25:19.167454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.833 qpair failed and we were unable to recover it.
00:30:36.833 [2024-12-09 05:25:19.177390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:36.833 [2024-12-09 05:25:19.177447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:36.833 [2024-12-09 05:25:19.177463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:36.833 [2024-12-09 05:25:19.177472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:36.833 [2024-12-09 05:25:19.177484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:36.833 [2024-12-09 05:25:19.177501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.833 qpair failed and we were unable to recover it.
00:30:36.833 [2024-12-09 05:25:19.187520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.833 [2024-12-09 05:25:19.187631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.833 [2024-12-09 05:25:19.187647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.834 [2024-12-09 05:25:19.187656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.834 [2024-12-09 05:25:19.187664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.834 [2024-12-09 05:25:19.187682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.834 qpair failed and we were unable to recover it. 
00:30:36.834 [2024-12-09 05:25:19.197503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.834 [2024-12-09 05:25:19.197562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.834 [2024-12-09 05:25:19.197578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.834 [2024-12-09 05:25:19.197587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.834 [2024-12-09 05:25:19.197596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.834 [2024-12-09 05:25:19.197614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.834 qpair failed and we were unable to recover it. 
00:30:36.834 [2024-12-09 05:25:19.207470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.834 [2024-12-09 05:25:19.207531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.834 [2024-12-09 05:25:19.207546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.834 [2024-12-09 05:25:19.207556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.834 [2024-12-09 05:25:19.207564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.834 [2024-12-09 05:25:19.207582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.834 qpair failed and we were unable to recover it. 
00:30:36.834 [2024-12-09 05:25:19.217540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.834 [2024-12-09 05:25:19.217593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.834 [2024-12-09 05:25:19.217609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.834 [2024-12-09 05:25:19.217618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.834 [2024-12-09 05:25:19.217627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.834 [2024-12-09 05:25:19.217644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.834 qpair failed and we were unable to recover it. 
00:30:36.834 [2024-12-09 05:25:19.227571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.834 [2024-12-09 05:25:19.227627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.834 [2024-12-09 05:25:19.227643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.834 [2024-12-09 05:25:19.227652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.834 [2024-12-09 05:25:19.227660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.834 [2024-12-09 05:25:19.227678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.834 qpair failed and we were unable to recover it. 
00:30:36.834 [2024-12-09 05:25:19.237622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.834 [2024-12-09 05:25:19.237681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.834 [2024-12-09 05:25:19.237696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.834 [2024-12-09 05:25:19.237705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.834 [2024-12-09 05:25:19.237714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.834 [2024-12-09 05:25:19.237731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.834 qpair failed and we were unable to recover it. 
00:30:36.834 [2024-12-09 05:25:19.247629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.834 [2024-12-09 05:25:19.247684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.834 [2024-12-09 05:25:19.247700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.834 [2024-12-09 05:25:19.247709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.834 [2024-12-09 05:25:19.247718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.834 [2024-12-09 05:25:19.247736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.834 qpair failed and we were unable to recover it. 
00:30:36.834 [2024-12-09 05:25:19.257651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.834 [2024-12-09 05:25:19.257707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.834 [2024-12-09 05:25:19.257722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.834 [2024-12-09 05:25:19.257732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.834 [2024-12-09 05:25:19.257740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.834 [2024-12-09 05:25:19.257757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.834 qpair failed and we were unable to recover it. 
00:30:36.834 [2024-12-09 05:25:19.267615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.834 [2024-12-09 05:25:19.267673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.834 [2024-12-09 05:25:19.267693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.834 [2024-12-09 05:25:19.267702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.834 [2024-12-09 05:25:19.267710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.834 [2024-12-09 05:25:19.267727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.834 qpair failed and we were unable to recover it. 
00:30:36.834 [2024-12-09 05:25:19.277739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.834 [2024-12-09 05:25:19.277798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.834 [2024-12-09 05:25:19.277814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.834 [2024-12-09 05:25:19.277823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.834 [2024-12-09 05:25:19.277831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.834 [2024-12-09 05:25:19.277849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.834 qpair failed and we were unable to recover it. 
00:30:36.834 [2024-12-09 05:25:19.287733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.834 [2024-12-09 05:25:19.287790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.834 [2024-12-09 05:25:19.287806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.834 [2024-12-09 05:25:19.287816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.834 [2024-12-09 05:25:19.287824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.834 [2024-12-09 05:25:19.287842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.834 qpair failed and we were unable to recover it. 
00:30:36.834 [2024-12-09 05:25:19.297707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:36.834 [2024-12-09 05:25:19.297766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:36.834 [2024-12-09 05:25:19.297782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:36.834 [2024-12-09 05:25:19.297791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:36.834 [2024-12-09 05:25:19.297800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:36.834 [2024-12-09 05:25:19.297817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.834 qpair failed and we were unable to recover it. 
00:30:37.094 [2024-12-09 05:25:19.307792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.094 [2024-12-09 05:25:19.307847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.094 [2024-12-09 05:25:19.307862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.094 [2024-12-09 05:25:19.307875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.094 [2024-12-09 05:25:19.307884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.094 [2024-12-09 05:25:19.307901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.094 qpair failed and we were unable to recover it. 
00:30:37.094 [2024-12-09 05:25:19.317835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.094 [2024-12-09 05:25:19.317908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.094 [2024-12-09 05:25:19.317924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.094 [2024-12-09 05:25:19.317933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.094 [2024-12-09 05:25:19.317942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.094 [2024-12-09 05:25:19.317959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.094 qpair failed and we were unable to recover it. 
00:30:37.094 [2024-12-09 05:25:19.327839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.094 [2024-12-09 05:25:19.327893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.094 [2024-12-09 05:25:19.327909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.094 [2024-12-09 05:25:19.327918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.094 [2024-12-09 05:25:19.327927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.094 [2024-12-09 05:25:19.327945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.094 qpair failed and we were unable to recover it. 
00:30:37.094 [2024-12-09 05:25:19.337875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.094 [2024-12-09 05:25:19.337939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.094 [2024-12-09 05:25:19.337955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.094 [2024-12-09 05:25:19.337964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.094 [2024-12-09 05:25:19.337973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.094 [2024-12-09 05:25:19.337990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.094 qpair failed and we were unable to recover it. 
00:30:37.094 [2024-12-09 05:25:19.347933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.094 [2024-12-09 05:25:19.347991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.094 [2024-12-09 05:25:19.348007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.094 [2024-12-09 05:25:19.348016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.094 [2024-12-09 05:25:19.348025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.094 [2024-12-09 05:25:19.348042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.094 qpair failed and we were unable to recover it. 
00:30:37.094 [2024-12-09 05:25:19.357939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.094 [2024-12-09 05:25:19.358019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.094 [2024-12-09 05:25:19.358035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.094 [2024-12-09 05:25:19.358044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.094 [2024-12-09 05:25:19.358052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.094 [2024-12-09 05:25:19.358069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.094 qpair failed and we were unable to recover it. 
00:30:37.094 [2024-12-09 05:25:19.367990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.094 [2024-12-09 05:25:19.368047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.094 [2024-12-09 05:25:19.368063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.094 [2024-12-09 05:25:19.368072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.094 [2024-12-09 05:25:19.368080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.094 [2024-12-09 05:25:19.368098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.094 qpair failed and we were unable to recover it. 
00:30:37.094 [2024-12-09 05:25:19.377961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.094 [2024-12-09 05:25:19.378020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.094 [2024-12-09 05:25:19.378036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.094 [2024-12-09 05:25:19.378045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.094 [2024-12-09 05:25:19.378054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.094 [2024-12-09 05:25:19.378072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.094 qpair failed and we were unable to recover it. 
00:30:37.094 [2024-12-09 05:25:19.388003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.094 [2024-12-09 05:25:19.388061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.094 [2024-12-09 05:25:19.388077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.094 [2024-12-09 05:25:19.388086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.094 [2024-12-09 05:25:19.388094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.094 [2024-12-09 05:25:19.388112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.094 qpair failed and we were unable to recover it. 
00:30:37.095 [2024-12-09 05:25:19.398040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.095 [2024-12-09 05:25:19.398101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.095 [2024-12-09 05:25:19.398118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.095 [2024-12-09 05:25:19.398126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.095 [2024-12-09 05:25:19.398135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.095 [2024-12-09 05:25:19.398152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.095 qpair failed and we were unable to recover it. 
00:30:37.095 [2024-12-09 05:25:19.408068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.095 [2024-12-09 05:25:19.408127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.095 [2024-12-09 05:25:19.408143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.095 [2024-12-09 05:25:19.408152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.095 [2024-12-09 05:25:19.408161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.095 [2024-12-09 05:25:19.408178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.095 qpair failed and we were unable to recover it. 
00:30:37.095 [2024-12-09 05:25:19.418089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.095 [2024-12-09 05:25:19.418147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.095 [2024-12-09 05:25:19.418162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.095 [2024-12-09 05:25:19.418171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.095 [2024-12-09 05:25:19.418180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.095 [2024-12-09 05:25:19.418197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.095 qpair failed and we were unable to recover it. 
00:30:37.095 [2024-12-09 05:25:19.428123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.095 [2024-12-09 05:25:19.428180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.095 [2024-12-09 05:25:19.428195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.095 [2024-12-09 05:25:19.428204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.095 [2024-12-09 05:25:19.428217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.095 [2024-12-09 05:25:19.428234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.095 qpair failed and we were unable to recover it. 
00:30:37.095 [2024-12-09 05:25:19.438160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.095 [2024-12-09 05:25:19.438225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.095 [2024-12-09 05:25:19.438240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.095 [2024-12-09 05:25:19.438256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.095 [2024-12-09 05:25:19.438264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.095 [2024-12-09 05:25:19.438282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.095 qpair failed and we were unable to recover it. 
00:30:37.095 [2024-12-09 05:25:19.448189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.095 [2024-12-09 05:25:19.448249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.095 [2024-12-09 05:25:19.448265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.095 [2024-12-09 05:25:19.448275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.095 [2024-12-09 05:25:19.448283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.095 [2024-12-09 05:25:19.448301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.095 qpair failed and we were unable to recover it. 
00:30:37.095 [2024-12-09 05:25:19.458229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.095 [2024-12-09 05:25:19.458286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.095 [2024-12-09 05:25:19.458302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.095 [2024-12-09 05:25:19.458311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.095 [2024-12-09 05:25:19.458320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.095 [2024-12-09 05:25:19.458337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.095 qpair failed and we were unable to recover it. 
00:30:37.095 [2024-12-09 05:25:19.468245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.095 [2024-12-09 05:25:19.468305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.095 [2024-12-09 05:25:19.468321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.095 [2024-12-09 05:25:19.468330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.095 [2024-12-09 05:25:19.468339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.095 [2024-12-09 05:25:19.468356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.095 qpair failed and we were unable to recover it. 
00:30:37.095 [2024-12-09 05:25:19.478275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.095 [2024-12-09 05:25:19.478332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.095 [2024-12-09 05:25:19.478349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.095 [2024-12-09 05:25:19.478358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.095 [2024-12-09 05:25:19.478366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.095 [2024-12-09 05:25:19.478387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.095 qpair failed and we were unable to recover it.
00:30:37.095 [2024-12-09 05:25:19.488343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.095 [2024-12-09 05:25:19.488400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.095 [2024-12-09 05:25:19.488416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.095 [2024-12-09 05:25:19.488425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.095 [2024-12-09 05:25:19.488433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.095 [2024-12-09 05:25:19.488451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.095 qpair failed and we were unable to recover it.
00:30:37.095 [2024-12-09 05:25:19.498333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.095 [2024-12-09 05:25:19.498387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.095 [2024-12-09 05:25:19.498403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.095 [2024-12-09 05:25:19.498412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.095 [2024-12-09 05:25:19.498421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.095 [2024-12-09 05:25:19.498439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.095 qpair failed and we were unable to recover it.
00:30:37.095 [2024-12-09 05:25:19.508386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.095 [2024-12-09 05:25:19.508444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.095 [2024-12-09 05:25:19.508460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.095 [2024-12-09 05:25:19.508469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.095 [2024-12-09 05:25:19.508477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.095 [2024-12-09 05:25:19.508495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.095 qpair failed and we were unable to recover it.
00:30:37.095 [2024-12-09 05:25:19.518378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.095 [2024-12-09 05:25:19.518437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.095 [2024-12-09 05:25:19.518453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.095 [2024-12-09 05:25:19.518462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.095 [2024-12-09 05:25:19.518470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.095 [2024-12-09 05:25:19.518488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.095 qpair failed and we were unable to recover it.
00:30:37.095 [2024-12-09 05:25:19.528411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.095 [2024-12-09 05:25:19.528470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.095 [2024-12-09 05:25:19.528486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.095 [2024-12-09 05:25:19.528495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.095 [2024-12-09 05:25:19.528504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.095 [2024-12-09 05:25:19.528521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.095 qpair failed and we were unable to recover it.
00:30:37.095 [2024-12-09 05:25:19.538448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.095 [2024-12-09 05:25:19.538500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.095 [2024-12-09 05:25:19.538516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.095 [2024-12-09 05:25:19.538525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.095 [2024-12-09 05:25:19.538533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.095 [2024-12-09 05:25:19.538551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.095 qpair failed and we were unable to recover it.
00:30:37.095 [2024-12-09 05:25:19.548396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.095 [2024-12-09 05:25:19.548455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.095 [2024-12-09 05:25:19.548471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.095 [2024-12-09 05:25:19.548480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.095 [2024-12-09 05:25:19.548489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.095 [2024-12-09 05:25:19.548506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.095 qpair failed and we were unable to recover it.
00:30:37.095 [2024-12-09 05:25:19.558492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.095 [2024-12-09 05:25:19.558549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.095 [2024-12-09 05:25:19.558564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.095 [2024-12-09 05:25:19.558574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.095 [2024-12-09 05:25:19.558582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.095 [2024-12-09 05:25:19.558599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.095 qpair failed and we were unable to recover it.
00:30:37.355 [2024-12-09 05:25:19.568470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.355 [2024-12-09 05:25:19.568528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.355 [2024-12-09 05:25:19.568548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.355 [2024-12-09 05:25:19.568557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.355 [2024-12-09 05:25:19.568566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.355 [2024-12-09 05:25:19.568584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.355 qpair failed and we were unable to recover it.
00:30:37.355 [2024-12-09 05:25:19.578551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.355 [2024-12-09 05:25:19.578602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.355 [2024-12-09 05:25:19.578618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.355 [2024-12-09 05:25:19.578627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.355 [2024-12-09 05:25:19.578636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.355 [2024-12-09 05:25:19.578653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.355 qpair failed and we were unable to recover it.
00:30:37.355 [2024-12-09 05:25:19.588538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.355 [2024-12-09 05:25:19.588627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.355 [2024-12-09 05:25:19.588643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.355 [2024-12-09 05:25:19.588652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.355 [2024-12-09 05:25:19.588660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.355 [2024-12-09 05:25:19.588678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.355 qpair failed and we were unable to recover it.
00:30:37.355 [2024-12-09 05:25:19.598648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.355 [2024-12-09 05:25:19.598706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.355 [2024-12-09 05:25:19.598722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.355 [2024-12-09 05:25:19.598731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.355 [2024-12-09 05:25:19.598739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.355 [2024-12-09 05:25:19.598756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.355 qpair failed and we were unable to recover it.
00:30:37.355 [2024-12-09 05:25:19.608622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.355 [2024-12-09 05:25:19.608680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.355 [2024-12-09 05:25:19.608696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.355 [2024-12-09 05:25:19.608705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.356 [2024-12-09 05:25:19.608713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.356 [2024-12-09 05:25:19.608734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.356 qpair failed and we were unable to recover it.
00:30:37.356 [2024-12-09 05:25:19.618664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.356 [2024-12-09 05:25:19.618723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.356 [2024-12-09 05:25:19.618738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.356 [2024-12-09 05:25:19.618747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.356 [2024-12-09 05:25:19.618756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.356 [2024-12-09 05:25:19.618773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.356 qpair failed and we were unable to recover it.
00:30:37.356 [2024-12-09 05:25:19.628673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.356 [2024-12-09 05:25:19.628752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.356 [2024-12-09 05:25:19.628767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.356 [2024-12-09 05:25:19.628776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.356 [2024-12-09 05:25:19.628785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.356 [2024-12-09 05:25:19.628802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.356 qpair failed and we were unable to recover it.
00:30:37.356 [2024-12-09 05:25:19.638743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.356 [2024-12-09 05:25:19.638799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.356 [2024-12-09 05:25:19.638815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.356 [2024-12-09 05:25:19.638824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.356 [2024-12-09 05:25:19.638833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.356 [2024-12-09 05:25:19.638850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.356 qpair failed and we were unable to recover it.
00:30:37.356 [2024-12-09 05:25:19.648740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.356 [2024-12-09 05:25:19.648795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.356 [2024-12-09 05:25:19.648810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.356 [2024-12-09 05:25:19.648820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.356 [2024-12-09 05:25:19.648828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.356 [2024-12-09 05:25:19.648845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.356 qpair failed and we were unable to recover it.
00:30:37.356 [2024-12-09 05:25:19.658761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.356 [2024-12-09 05:25:19.658817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.356 [2024-12-09 05:25:19.658832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.356 [2024-12-09 05:25:19.658841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.356 [2024-12-09 05:25:19.658849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.356 [2024-12-09 05:25:19.658867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.356 qpair failed and we were unable to recover it.
00:30:37.356 [2024-12-09 05:25:19.668803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.356 [2024-12-09 05:25:19.668854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.356 [2024-12-09 05:25:19.668870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.356 [2024-12-09 05:25:19.668879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.356 [2024-12-09 05:25:19.668887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.356 [2024-12-09 05:25:19.668904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.356 qpair failed and we were unable to recover it.
00:30:37.356 [2024-12-09 05:25:19.678813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.356 [2024-12-09 05:25:19.678868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.356 [2024-12-09 05:25:19.678884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.356 [2024-12-09 05:25:19.678893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.356 [2024-12-09 05:25:19.678901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.356 [2024-12-09 05:25:19.678919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.356 qpair failed and we were unable to recover it.
00:30:37.356 [2024-12-09 05:25:19.688880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.356 [2024-12-09 05:25:19.688942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.356 [2024-12-09 05:25:19.688958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.356 [2024-12-09 05:25:19.688967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.356 [2024-12-09 05:25:19.688976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.356 [2024-12-09 05:25:19.688994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.356 qpair failed and we were unable to recover it.
00:30:37.356 [2024-12-09 05:25:19.698898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.356 [2024-12-09 05:25:19.698952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.356 [2024-12-09 05:25:19.698972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.356 [2024-12-09 05:25:19.698981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.356 [2024-12-09 05:25:19.698990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.356 [2024-12-09 05:25:19.699007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.356 qpair failed and we were unable to recover it.
00:30:37.356 [2024-12-09 05:25:19.708916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.356 [2024-12-09 05:25:19.708971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.356 [2024-12-09 05:25:19.708987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.356 [2024-12-09 05:25:19.708997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.356 [2024-12-09 05:25:19.709005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.356 [2024-12-09 05:25:19.709022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.356 qpair failed and we were unable to recover it.
00:30:37.356 [2024-12-09 05:25:19.718946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.356 [2024-12-09 05:25:19.719002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.356 [2024-12-09 05:25:19.719018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.356 [2024-12-09 05:25:19.719028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.356 [2024-12-09 05:25:19.719036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.356 [2024-12-09 05:25:19.719053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.356 qpair failed and we were unable to recover it.
00:30:37.356 [2024-12-09 05:25:19.728979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.356 [2024-12-09 05:25:19.729038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.356 [2024-12-09 05:25:19.729055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.356 [2024-12-09 05:25:19.729065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.356 [2024-12-09 05:25:19.729074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.356 [2024-12-09 05:25:19.729092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.356 qpair failed and we were unable to recover it.
00:30:37.356 [2024-12-09 05:25:19.739000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.356 [2024-12-09 05:25:19.739053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.356 [2024-12-09 05:25:19.739068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.356 [2024-12-09 05:25:19.739077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.356 [2024-12-09 05:25:19.739088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.356 [2024-12-09 05:25:19.739106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.356 qpair failed and we were unable to recover it.
00:30:37.356 [2024-12-09 05:25:19.749079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.356 [2024-12-09 05:25:19.749135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.356 [2024-12-09 05:25:19.749151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.356 [2024-12-09 05:25:19.749160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.356 [2024-12-09 05:25:19.749169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.356 [2024-12-09 05:25:19.749187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.356 qpair failed and we were unable to recover it.
00:30:37.356 [2024-12-09 05:25:19.759063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.356 [2024-12-09 05:25:19.759127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.356 [2024-12-09 05:25:19.759143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.356 [2024-12-09 05:25:19.759152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.356 [2024-12-09 05:25:19.759161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.356 [2024-12-09 05:25:19.759178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.356 qpair failed and we were unable to recover it.
00:30:37.356 [2024-12-09 05:25:19.769135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.356 [2024-12-09 05:25:19.769193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.356 [2024-12-09 05:25:19.769213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.356 [2024-12-09 05:25:19.769227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.356 [2024-12-09 05:25:19.769236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.356 [2024-12-09 05:25:19.769254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.356 qpair failed and we were unable to recover it.
00:30:37.356 [2024-12-09 05:25:19.779125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:37.356 [2024-12-09 05:25:19.779193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:37.356 [2024-12-09 05:25:19.779212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:37.356 [2024-12-09 05:25:19.779222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:37.356 [2024-12-09 05:25:19.779230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:37.356 [2024-12-09 05:25:19.779248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:37.356 qpair failed and we were unable to recover it.
00:30:37.356 [2024-12-09 05:25:19.789138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.356 [2024-12-09 05:25:19.789195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.356 [2024-12-09 05:25:19.789218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.356 [2024-12-09 05:25:19.789230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.356 [2024-12-09 05:25:19.789238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.356 [2024-12-09 05:25:19.789257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.356 qpair failed and we were unable to recover it. 
00:30:37.356 [2024-12-09 05:25:19.799191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.356 [2024-12-09 05:25:19.799255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.356 [2024-12-09 05:25:19.799270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.356 [2024-12-09 05:25:19.799279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.356 [2024-12-09 05:25:19.799288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.356 [2024-12-09 05:25:19.799306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.356 qpair failed and we were unable to recover it. 
00:30:37.356 [2024-12-09 05:25:19.809245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.356 [2024-12-09 05:25:19.809321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.356 [2024-12-09 05:25:19.809336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.356 [2024-12-09 05:25:19.809345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.356 [2024-12-09 05:25:19.809354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.356 [2024-12-09 05:25:19.809371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.356 qpair failed and we were unable to recover it. 
00:30:37.356 [2024-12-09 05:25:19.819243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.356 [2024-12-09 05:25:19.819297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.356 [2024-12-09 05:25:19.819313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.356 [2024-12-09 05:25:19.819322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.356 [2024-12-09 05:25:19.819330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.356 [2024-12-09 05:25:19.819347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.356 qpair failed and we were unable to recover it. 
00:30:37.617 [2024-12-09 05:25:19.829252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.617 [2024-12-09 05:25:19.829305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.617 [2024-12-09 05:25:19.829323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.617 [2024-12-09 05:25:19.829333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.617 [2024-12-09 05:25:19.829341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.617 [2024-12-09 05:25:19.829358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.617 qpair failed and we were unable to recover it. 
00:30:37.617 [2024-12-09 05:25:19.839327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.617 [2024-12-09 05:25:19.839430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.617 [2024-12-09 05:25:19.839445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.617 [2024-12-09 05:25:19.839454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.617 [2024-12-09 05:25:19.839463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.617 [2024-12-09 05:25:19.839480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.617 qpair failed and we were unable to recover it. 
00:30:37.617 [2024-12-09 05:25:19.849310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.617 [2024-12-09 05:25:19.849369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.617 [2024-12-09 05:25:19.849384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.617 [2024-12-09 05:25:19.849393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.617 [2024-12-09 05:25:19.849402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.617 [2024-12-09 05:25:19.849420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.617 qpair failed and we were unable to recover it. 
00:30:37.617 [2024-12-09 05:25:19.859350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.617 [2024-12-09 05:25:19.859408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.617 [2024-12-09 05:25:19.859424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.617 [2024-12-09 05:25:19.859433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.617 [2024-12-09 05:25:19.859441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.617 [2024-12-09 05:25:19.859459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.617 qpair failed and we were unable to recover it. 
00:30:37.617 [2024-12-09 05:25:19.869363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.617 [2024-12-09 05:25:19.869418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.617 [2024-12-09 05:25:19.869434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.617 [2024-12-09 05:25:19.869446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.617 [2024-12-09 05:25:19.869454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.617 [2024-12-09 05:25:19.869472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.617 qpair failed and we were unable to recover it. 
00:30:37.617 [2024-12-09 05:25:19.879403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.617 [2024-12-09 05:25:19.879461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.617 [2024-12-09 05:25:19.879478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.617 [2024-12-09 05:25:19.879487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.617 [2024-12-09 05:25:19.879496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.617 [2024-12-09 05:25:19.879514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.617 qpair failed and we were unable to recover it. 
00:30:37.617 [2024-12-09 05:25:19.889444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.617 [2024-12-09 05:25:19.889499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.617 [2024-12-09 05:25:19.889515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.617 [2024-12-09 05:25:19.889524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.617 [2024-12-09 05:25:19.889533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.617 [2024-12-09 05:25:19.889550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.617 qpair failed and we were unable to recover it. 
00:30:37.617 [2024-12-09 05:25:19.899378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.617 [2024-12-09 05:25:19.899431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.617 [2024-12-09 05:25:19.899447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.617 [2024-12-09 05:25:19.899456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.617 [2024-12-09 05:25:19.899465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.617 [2024-12-09 05:25:19.899482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.617 qpair failed and we were unable to recover it. 
00:30:37.617 [2024-12-09 05:25:19.909537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.617 [2024-12-09 05:25:19.909599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.617 [2024-12-09 05:25:19.909615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.617 [2024-12-09 05:25:19.909624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.617 [2024-12-09 05:25:19.909632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.617 [2024-12-09 05:25:19.909649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.617 qpair failed and we were unable to recover it. 
00:30:37.617 [2024-12-09 05:25:19.919492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.617 [2024-12-09 05:25:19.919558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.617 [2024-12-09 05:25:19.919573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.617 [2024-12-09 05:25:19.919582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.617 [2024-12-09 05:25:19.919591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.617 [2024-12-09 05:25:19.919608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.617 qpair failed and we were unable to recover it. 
00:30:37.617 [2024-12-09 05:25:19.929544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.617 [2024-12-09 05:25:19.929604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.617 [2024-12-09 05:25:19.929620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.617 [2024-12-09 05:25:19.929629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.617 [2024-12-09 05:25:19.929637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.617 [2024-12-09 05:25:19.929655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.617 qpair failed and we were unable to recover it. 
00:30:37.617 [2024-12-09 05:25:19.939714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.617 [2024-12-09 05:25:19.939783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.617 [2024-12-09 05:25:19.939799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.617 [2024-12-09 05:25:19.939808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.617 [2024-12-09 05:25:19.939816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.617 [2024-12-09 05:25:19.939834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.617 qpair failed and we were unable to recover it. 
00:30:37.617 [2024-12-09 05:25:19.949672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.617 [2024-12-09 05:25:19.949732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.617 [2024-12-09 05:25:19.949748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.617 [2024-12-09 05:25:19.949757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.617 [2024-12-09 05:25:19.949765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.617 [2024-12-09 05:25:19.949783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.617 qpair failed and we were unable to recover it. 
00:30:37.617 [2024-12-09 05:25:19.959693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.617 [2024-12-09 05:25:19.959803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.617 [2024-12-09 05:25:19.959819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.617 [2024-12-09 05:25:19.959828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.617 [2024-12-09 05:25:19.959836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.617 [2024-12-09 05:25:19.959853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.617 qpair failed and we were unable to recover it. 
00:30:37.617 [2024-12-09 05:25:19.969729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.618 [2024-12-09 05:25:19.969784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.618 [2024-12-09 05:25:19.969800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.618 [2024-12-09 05:25:19.969809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.618 [2024-12-09 05:25:19.969817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.618 [2024-12-09 05:25:19.969834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.618 qpair failed and we were unable to recover it. 
00:30:37.618 [2024-12-09 05:25:19.979695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.618 [2024-12-09 05:25:19.979754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.618 [2024-12-09 05:25:19.979770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.618 [2024-12-09 05:25:19.979779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.618 [2024-12-09 05:25:19.979788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.618 [2024-12-09 05:25:19.979805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.618 qpair failed and we were unable to recover it. 
00:30:37.618 [2024-12-09 05:25:19.989717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.618 [2024-12-09 05:25:19.989773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.618 [2024-12-09 05:25:19.989789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.618 [2024-12-09 05:25:19.989798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.618 [2024-12-09 05:25:19.989806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.618 [2024-12-09 05:25:19.989823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.618 qpair failed and we were unable to recover it. 
00:30:37.618 [2024-12-09 05:25:19.999693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.618 [2024-12-09 05:25:19.999751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.618 [2024-12-09 05:25:19.999766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.618 [2024-12-09 05:25:19.999778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.618 [2024-12-09 05:25:19.999786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.618 [2024-12-09 05:25:19.999804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.618 qpair failed and we were unable to recover it. 
00:30:37.618 [2024-12-09 05:25:20.009909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.618 [2024-12-09 05:25:20.010027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.618 [2024-12-09 05:25:20.010079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.618 [2024-12-09 05:25:20.010099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.618 [2024-12-09 05:25:20.010111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.618 [2024-12-09 05:25:20.010168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.618 qpair failed and we were unable to recover it. 
00:30:37.618 [2024-12-09 05:25:20.019804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.618 [2024-12-09 05:25:20.019858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.618 [2024-12-09 05:25:20.019875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.618 [2024-12-09 05:25:20.019884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.618 [2024-12-09 05:25:20.019893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.618 [2024-12-09 05:25:20.019910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.618 qpair failed and we were unable to recover it. 
00:30:37.618 [2024-12-09 05:25:20.029848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.618 [2024-12-09 05:25:20.029902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.618 [2024-12-09 05:25:20.029918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.618 [2024-12-09 05:25:20.029927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.618 [2024-12-09 05:25:20.029936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.618 [2024-12-09 05:25:20.029954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.618 qpair failed and we were unable to recover it. 
00:30:37.618 [2024-12-09 05:25:20.039865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.618 [2024-12-09 05:25:20.039923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.618 [2024-12-09 05:25:20.039940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.618 [2024-12-09 05:25:20.039951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.618 [2024-12-09 05:25:20.039960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.618 [2024-12-09 05:25:20.039983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.618 qpair failed and we were unable to recover it. 
00:30:37.618 [2024-12-09 05:25:20.049904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.618 [2024-12-09 05:25:20.049960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.618 [2024-12-09 05:25:20.049976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.618 [2024-12-09 05:25:20.049985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.618 [2024-12-09 05:25:20.049994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.618 [2024-12-09 05:25:20.050011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.618 qpair failed and we were unable to recover it. 
00:30:37.618 [2024-12-09 05:25:20.059941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.618 [2024-12-09 05:25:20.059994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.618 [2024-12-09 05:25:20.060010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.618 [2024-12-09 05:25:20.060018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.618 [2024-12-09 05:25:20.060027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.618 [2024-12-09 05:25:20.060044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.618 qpair failed and we were unable to recover it. 
00:30:37.618 [2024-12-09 05:25:20.069954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.618 [2024-12-09 05:25:20.070009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.618 [2024-12-09 05:25:20.070025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.618 [2024-12-09 05:25:20.070034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.618 [2024-12-09 05:25:20.070043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.618 [2024-12-09 05:25:20.070060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.618 qpair failed and we were unable to recover it. 
00:30:37.618 [2024-12-09 05:25:20.079988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.618 [2024-12-09 05:25:20.080045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.618 [2024-12-09 05:25:20.080061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.618 [2024-12-09 05:25:20.080070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.618 [2024-12-09 05:25:20.080079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.618 [2024-12-09 05:25:20.080096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.618 qpair failed and we were unable to recover it. 
00:30:37.878 [2024-12-09 05:25:20.090028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.878 [2024-12-09 05:25:20.090091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.878 [2024-12-09 05:25:20.090106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.878 [2024-12-09 05:25:20.090115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.878 [2024-12-09 05:25:20.090123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.878 [2024-12-09 05:25:20.090141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.878 qpair failed and we were unable to recover it. 
00:30:37.878 [2024-12-09 05:25:20.100055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.878 [2024-12-09 05:25:20.100109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.878 [2024-12-09 05:25:20.100125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.878 [2024-12-09 05:25:20.100134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.878 [2024-12-09 05:25:20.100143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.878 [2024-12-09 05:25:20.100160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.878 qpair failed and we were unable to recover it. 
00:30:37.878 [2024-12-09 05:25:20.110073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.878 [2024-12-09 05:25:20.110126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.878 [2024-12-09 05:25:20.110141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.878 [2024-12-09 05:25:20.110150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.878 [2024-12-09 05:25:20.110158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.878 [2024-12-09 05:25:20.110176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.878 qpair failed and we were unable to recover it. 
00:30:37.878 [2024-12-09 05:25:20.120071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.878 [2024-12-09 05:25:20.120138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.878 [2024-12-09 05:25:20.120154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.878 [2024-12-09 05:25:20.120163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.878 [2024-12-09 05:25:20.120171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.878 [2024-12-09 05:25:20.120188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.878 qpair failed and we were unable to recover it. 
00:30:37.878 [2024-12-09 05:25:20.130131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.878 [2024-12-09 05:25:20.130187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.878 [2024-12-09 05:25:20.130205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.878 [2024-12-09 05:25:20.130219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.878 [2024-12-09 05:25:20.130228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.878 [2024-12-09 05:25:20.130245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.878 qpair failed and we were unable to recover it. 
00:30:37.878 [2024-12-09 05:25:20.140164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.878 [2024-12-09 05:25:20.140227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.878 [2024-12-09 05:25:20.140242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.878 [2024-12-09 05:25:20.140251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.878 [2024-12-09 05:25:20.140260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.878 [2024-12-09 05:25:20.140277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.878 qpair failed and we were unable to recover it. 
00:30:37.878 [2024-12-09 05:25:20.150201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.878 [2024-12-09 05:25:20.150278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.878 [2024-12-09 05:25:20.150293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.878 [2024-12-09 05:25:20.150302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.878 [2024-12-09 05:25:20.150311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.878 [2024-12-09 05:25:20.150328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.878 qpair failed and we were unable to recover it. 
00:30:37.878 [2024-12-09 05:25:20.160218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.878 [2024-12-09 05:25:20.160273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.878 [2024-12-09 05:25:20.160289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.878 [2024-12-09 05:25:20.160297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.878 [2024-12-09 05:25:20.160306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.878 [2024-12-09 05:25:20.160323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.878 qpair failed and we were unable to recover it. 
00:30:37.878 [2024-12-09 05:25:20.170229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.878 [2024-12-09 05:25:20.170298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.878 [2024-12-09 05:25:20.170313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.878 [2024-12-09 05:25:20.170322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.878 [2024-12-09 05:25:20.170337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.878 [2024-12-09 05:25:20.170354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.878 qpair failed and we were unable to recover it. 
00:30:37.878 [2024-12-09 05:25:20.180199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.878 [2024-12-09 05:25:20.180253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.878 [2024-12-09 05:25:20.180269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.878 [2024-12-09 05:25:20.180278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.878 [2024-12-09 05:25:20.180286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.878 [2024-12-09 05:25:20.180304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.878 qpair failed and we were unable to recover it. 
00:30:37.878 [2024-12-09 05:25:20.190299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.878 [2024-12-09 05:25:20.190359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.878 [2024-12-09 05:25:20.190376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.878 [2024-12-09 05:25:20.190385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.878 [2024-12-09 05:25:20.190394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.878 [2024-12-09 05:25:20.190413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.878 qpair failed and we were unable to recover it. 
00:30:37.878 [2024-12-09 05:25:20.200250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.878 [2024-12-09 05:25:20.200350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.878 [2024-12-09 05:25:20.200367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.878 [2024-12-09 05:25:20.200376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.878 [2024-12-09 05:25:20.200385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.878 [2024-12-09 05:25:20.200402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.878 qpair failed and we were unable to recover it. 
00:30:37.878 [2024-12-09 05:25:20.210355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.878 [2024-12-09 05:25:20.210434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.878 [2024-12-09 05:25:20.210451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.878 [2024-12-09 05:25:20.210460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.878 [2024-12-09 05:25:20.210468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.878 [2024-12-09 05:25:20.210486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.878 qpair failed and we were unable to recover it. 
00:30:37.878 [2024-12-09 05:25:20.220381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.878 [2024-12-09 05:25:20.220443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.878 [2024-12-09 05:25:20.220460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.878 [2024-12-09 05:25:20.220469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.878 [2024-12-09 05:25:20.220478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.878 [2024-12-09 05:25:20.220495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.878 qpair failed and we were unable to recover it. 
00:30:37.878 [2024-12-09 05:25:20.230375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.878 [2024-12-09 05:25:20.230471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.878 [2024-12-09 05:25:20.230487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.878 [2024-12-09 05:25:20.230497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.878 [2024-12-09 05:25:20.230505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.878 [2024-12-09 05:25:20.230523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.878 qpair failed and we were unable to recover it. 
00:30:37.878 [2024-12-09 05:25:20.240482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.878 [2024-12-09 05:25:20.240585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.878 [2024-12-09 05:25:20.240602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.878 [2024-12-09 05:25:20.240611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.878 [2024-12-09 05:25:20.240619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.879 [2024-12-09 05:25:20.240637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.879 qpair failed and we were unable to recover it. 
00:30:37.879 [2024-12-09 05:25:20.250469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.879 [2024-12-09 05:25:20.250541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.879 [2024-12-09 05:25:20.250556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.879 [2024-12-09 05:25:20.250566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.879 [2024-12-09 05:25:20.250574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.879 [2024-12-09 05:25:20.250592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.879 qpair failed and we were unable to recover it. 
00:30:37.879 [2024-12-09 05:25:20.260415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.879 [2024-12-09 05:25:20.260469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.879 [2024-12-09 05:25:20.260488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.879 [2024-12-09 05:25:20.260497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.879 [2024-12-09 05:25:20.260506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.879 [2024-12-09 05:25:20.260523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.879 qpair failed and we were unable to recover it. 
00:30:37.879 [2024-12-09 05:25:20.270534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.879 [2024-12-09 05:25:20.270587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.879 [2024-12-09 05:25:20.270603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.879 [2024-12-09 05:25:20.270612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.879 [2024-12-09 05:25:20.270621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.879 [2024-12-09 05:25:20.270639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.879 qpair failed and we were unable to recover it. 
00:30:37.879 [2024-12-09 05:25:20.280557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.879 [2024-12-09 05:25:20.280615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.879 [2024-12-09 05:25:20.280631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.879 [2024-12-09 05:25:20.280641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.879 [2024-12-09 05:25:20.280649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.879 [2024-12-09 05:25:20.280667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.879 qpair failed and we were unable to recover it. 
00:30:37.879 [2024-12-09 05:25:20.290608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.879 [2024-12-09 05:25:20.290693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.879 [2024-12-09 05:25:20.290710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.879 [2024-12-09 05:25:20.290720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.879 [2024-12-09 05:25:20.290729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.879 [2024-12-09 05:25:20.290746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.879 qpair failed and we were unable to recover it. 
00:30:37.879 [2024-12-09 05:25:20.300532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.879 [2024-12-09 05:25:20.300596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.879 [2024-12-09 05:25:20.300613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.879 [2024-12-09 05:25:20.300622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.879 [2024-12-09 05:25:20.300634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.879 [2024-12-09 05:25:20.300652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.879 qpair failed and we were unable to recover it. 
00:30:37.879 [2024-12-09 05:25:20.310558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.879 [2024-12-09 05:25:20.310612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.879 [2024-12-09 05:25:20.310628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.879 [2024-12-09 05:25:20.310637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.879 [2024-12-09 05:25:20.310646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.879 [2024-12-09 05:25:20.310664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.879 qpair failed and we were unable to recover it. 
00:30:37.879 [2024-12-09 05:25:20.320705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.879 [2024-12-09 05:25:20.320766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.879 [2024-12-09 05:25:20.320782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.879 [2024-12-09 05:25:20.320791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.879 [2024-12-09 05:25:20.320799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.879 [2024-12-09 05:25:20.320817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.879 qpair failed and we were unable to recover it. 
00:30:37.879 [2024-12-09 05:25:20.330685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.879 [2024-12-09 05:25:20.330747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.879 [2024-12-09 05:25:20.330765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.879 [2024-12-09 05:25:20.330776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.879 [2024-12-09 05:25:20.330787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.879 [2024-12-09 05:25:20.330807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.879 qpair failed and we were unable to recover it. 
00:30:37.879 [2024-12-09 05:25:20.340707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:37.879 [2024-12-09 05:25:20.340765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:37.879 [2024-12-09 05:25:20.340781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:37.879 [2024-12-09 05:25:20.340790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:37.879 [2024-12-09 05:25:20.340798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:37.879 [2024-12-09 05:25:20.340816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:37.879 qpair failed and we were unable to recover it. 
00:30:38.144 [2024-12-09 05:25:20.350706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.144 [2024-12-09 05:25:20.350770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.144 [2024-12-09 05:25:20.350787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.350796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.350804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.145 [2024-12-09 05:25:20.350822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.145 qpair failed and we were unable to recover it. 
00:30:38.145 [2024-12-09 05:25:20.360773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.145 [2024-12-09 05:25:20.360854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.145 [2024-12-09 05:25:20.360870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.360879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.360888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.145 [2024-12-09 05:25:20.360904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.145 qpair failed and we were unable to recover it. 
00:30:38.145 [2024-12-09 05:25:20.370800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.145 [2024-12-09 05:25:20.370857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.145 [2024-12-09 05:25:20.370873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.370882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.370890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.145 [2024-12-09 05:25:20.370908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.145 qpair failed and we were unable to recover it. 
00:30:38.145 [2024-12-09 05:25:20.380825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.145 [2024-12-09 05:25:20.380883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.145 [2024-12-09 05:25:20.380900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.380909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.380917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.145 [2024-12-09 05:25:20.380934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.145 qpair failed and we were unable to recover it. 
00:30:38.145 [2024-12-09 05:25:20.390850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.145 [2024-12-09 05:25:20.390906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.145 [2024-12-09 05:25:20.390924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.390934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.390942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.145 [2024-12-09 05:25:20.390959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.145 qpair failed and we were unable to recover it. 
00:30:38.145 [2024-12-09 05:25:20.400878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.145 [2024-12-09 05:25:20.400930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.145 [2024-12-09 05:25:20.400946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.400955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.400963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.145 [2024-12-09 05:25:20.400981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.145 qpair failed and we were unable to recover it. 
00:30:38.145 [2024-12-09 05:25:20.410905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.145 [2024-12-09 05:25:20.410963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.145 [2024-12-09 05:25:20.410978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.410988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.410996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.145 [2024-12-09 05:25:20.411014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.145 qpair failed and we were unable to recover it. 
00:30:38.145 [2024-12-09 05:25:20.420923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.145 [2024-12-09 05:25:20.420976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.145 [2024-12-09 05:25:20.420993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.421002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.421010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.145 [2024-12-09 05:25:20.421027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.145 qpair failed and we were unable to recover it. 
00:30:38.145 [2024-12-09 05:25:20.431003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.145 [2024-12-09 05:25:20.431057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.145 [2024-12-09 05:25:20.431073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.431085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.431093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.145 [2024-12-09 05:25:20.431110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.145 qpair failed and we were unable to recover it. 
00:30:38.145 [2024-12-09 05:25:20.441010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.145 [2024-12-09 05:25:20.441115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.145 [2024-12-09 05:25:20.441132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.441141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.441150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.145 [2024-12-09 05:25:20.441168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.145 qpair failed and we were unable to recover it. 
00:30:38.145 [2024-12-09 05:25:20.451012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.145 [2024-12-09 05:25:20.451065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.145 [2024-12-09 05:25:20.451081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.451090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.451099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.145 [2024-12-09 05:25:20.451117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.145 qpair failed and we were unable to recover it. 
00:30:38.145 [2024-12-09 05:25:20.461071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.145 [2024-12-09 05:25:20.461135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.145 [2024-12-09 05:25:20.461151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.461160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.461169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.145 [2024-12-09 05:25:20.461187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.145 qpair failed and we were unable to recover it. 
00:30:38.145 [2024-12-09 05:25:20.471077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.145 [2024-12-09 05:25:20.471135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.145 [2024-12-09 05:25:20.471151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.471160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.471168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.145 [2024-12-09 05:25:20.471186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.145 qpair failed and we were unable to recover it. 
00:30:38.145 [2024-12-09 05:25:20.481102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.145 [2024-12-09 05:25:20.481159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.145 [2024-12-09 05:25:20.481175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.481185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.481193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.145 [2024-12-09 05:25:20.481215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.145 qpair failed and we were unable to recover it. 
00:30:38.145 [2024-12-09 05:25:20.491131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.145 [2024-12-09 05:25:20.491191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.145 [2024-12-09 05:25:20.491210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.491219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.491228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.145 [2024-12-09 05:25:20.491246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.145 qpair failed and we were unable to recover it. 
00:30:38.145 [2024-12-09 05:25:20.501148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.145 [2024-12-09 05:25:20.501227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.145 [2024-12-09 05:25:20.501243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.501253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.501261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.145 [2024-12-09 05:25:20.501278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.145 qpair failed and we were unable to recover it. 
00:30:38.145 [2024-12-09 05:25:20.511222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.145 [2024-12-09 05:25:20.511276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.145 [2024-12-09 05:25:20.511292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.511301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.511310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.145 [2024-12-09 05:25:20.511327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.145 qpair failed and we were unable to recover it. 
00:30:38.145 [2024-12-09 05:25:20.521217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.145 [2024-12-09 05:25:20.521276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.145 [2024-12-09 05:25:20.521291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.521300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.521309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.145 [2024-12-09 05:25:20.521327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.145 qpair failed and we were unable to recover it. 
00:30:38.145 [2024-12-09 05:25:20.531247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.145 [2024-12-09 05:25:20.531330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.145 [2024-12-09 05:25:20.531346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.531355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.531364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.145 [2024-12-09 05:25:20.531381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.145 qpair failed and we were unable to recover it. 
00:30:38.145 [2024-12-09 05:25:20.541279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.145 [2024-12-09 05:25:20.541336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.145 [2024-12-09 05:25:20.541353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.541362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.541370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.145 [2024-12-09 05:25:20.541388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.145 qpair failed and we were unable to recover it. 
00:30:38.145 [2024-12-09 05:25:20.551283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.145 [2024-12-09 05:25:20.551363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.145 [2024-12-09 05:25:20.551382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.551394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.551403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.145 [2024-12-09 05:25:20.551421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.145 qpair failed and we were unable to recover it. 
00:30:38.145 [2024-12-09 05:25:20.561328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.145 [2024-12-09 05:25:20.561385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.145 [2024-12-09 05:25:20.561401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.145 [2024-12-09 05:25:20.561413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.145 [2024-12-09 05:25:20.561422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.146 [2024-12-09 05:25:20.561439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.146 qpair failed and we were unable to recover it. 
00:30:38.146 [2024-12-09 05:25:20.571345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.146 [2024-12-09 05:25:20.571403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.146 [2024-12-09 05:25:20.571419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.146 [2024-12-09 05:25:20.571428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.146 [2024-12-09 05:25:20.571437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.146 [2024-12-09 05:25:20.571455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.146 qpair failed and we were unable to recover it. 
00:30:38.146 [2024-12-09 05:25:20.581413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.146 [2024-12-09 05:25:20.581472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.146 [2024-12-09 05:25:20.581488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.146 [2024-12-09 05:25:20.581497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.146 [2024-12-09 05:25:20.581505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.146 [2024-12-09 05:25:20.581523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.146 qpair failed and we were unable to recover it. 
00:30:38.146 [2024-12-09 05:25:20.591416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.146 [2024-12-09 05:25:20.591468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.146 [2024-12-09 05:25:20.591484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.146 [2024-12-09 05:25:20.591493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.146 [2024-12-09 05:25:20.591501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.146 [2024-12-09 05:25:20.591519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.146 qpair failed and we were unable to recover it. 
00:30:38.146 [2024-12-09 05:25:20.601444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.146 [2024-12-09 05:25:20.601517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.146 [2024-12-09 05:25:20.601533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.146 [2024-12-09 05:25:20.601542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.146 [2024-12-09 05:25:20.601550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.146 [2024-12-09 05:25:20.601570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.146 qpair failed and we were unable to recover it. 
00:30:38.146 [2024-12-09 05:25:20.611485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.146 [2024-12-09 05:25:20.611541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.146 [2024-12-09 05:25:20.611558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.146 [2024-12-09 05:25:20.611567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.146 [2024-12-09 05:25:20.611575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.146 [2024-12-09 05:25:20.611593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.146 qpair failed and we were unable to recover it. 
00:30:38.406 [2024-12-09 05:25:20.621501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.406 [2024-12-09 05:25:20.621560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.406 [2024-12-09 05:25:20.621577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.406 [2024-12-09 05:25:20.621586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.406 [2024-12-09 05:25:20.621594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.406 [2024-12-09 05:25:20.621611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.406 qpair failed and we were unable to recover it. 
00:30:38.406 [2024-12-09 05:25:20.631512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.406 [2024-12-09 05:25:20.631565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.406 [2024-12-09 05:25:20.631581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.406 [2024-12-09 05:25:20.631591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.406 [2024-12-09 05:25:20.631599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.406 [2024-12-09 05:25:20.631616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.406 qpair failed and we were unable to recover it. 
00:30:38.406 [2024-12-09 05:25:20.641555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.406 [2024-12-09 05:25:20.641615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.406 [2024-12-09 05:25:20.641631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.406 [2024-12-09 05:25:20.641640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.406 [2024-12-09 05:25:20.641649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.406 [2024-12-09 05:25:20.641666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.406 qpair failed and we were unable to recover it. 
00:30:38.406 [2024-12-09 05:25:20.651611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.406 [2024-12-09 05:25:20.651671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.406 [2024-12-09 05:25:20.651687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.406 [2024-12-09 05:25:20.651696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.406 [2024-12-09 05:25:20.651705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.406 [2024-12-09 05:25:20.651722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.406 qpair failed and we were unable to recover it. 
00:30:38.406 [2024-12-09 05:25:20.661603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.406 [2024-12-09 05:25:20.661669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.406 [2024-12-09 05:25:20.661684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.406 [2024-12-09 05:25:20.661693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.406 [2024-12-09 05:25:20.661702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.406 [2024-12-09 05:25:20.661719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.406 qpair failed and we were unable to recover it. 
00:30:38.406 [2024-12-09 05:25:20.671625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.406 [2024-12-09 05:25:20.671683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.406 [2024-12-09 05:25:20.671699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.406 [2024-12-09 05:25:20.671708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.406 [2024-12-09 05:25:20.671716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.406 [2024-12-09 05:25:20.671733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.406 qpair failed and we were unable to recover it. 
00:30:38.406 [2024-12-09 05:25:20.681655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.406 [2024-12-09 05:25:20.681759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.406 [2024-12-09 05:25:20.681775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.406 [2024-12-09 05:25:20.681784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.406 [2024-12-09 05:25:20.681793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.406 [2024-12-09 05:25:20.681810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.406 qpair failed and we were unable to recover it. 
00:30:38.406 [2024-12-09 05:25:20.691734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.406 [2024-12-09 05:25:20.691793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.406 [2024-12-09 05:25:20.691815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.406 [2024-12-09 05:25:20.691825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.406 [2024-12-09 05:25:20.691835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.406 [2024-12-09 05:25:20.691853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.406 qpair failed and we were unable to recover it. 
00:30:38.406 [2024-12-09 05:25:20.701745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.406 [2024-12-09 05:25:20.701832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.406 [2024-12-09 05:25:20.701849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.406 [2024-12-09 05:25:20.701858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.406 [2024-12-09 05:25:20.701866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.406 [2024-12-09 05:25:20.701883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.406 qpair failed and we were unable to recover it. 
00:30:38.406 [2024-12-09 05:25:20.711736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.406 [2024-12-09 05:25:20.711792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.406 [2024-12-09 05:25:20.711807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.406 [2024-12-09 05:25:20.711816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.406 [2024-12-09 05:25:20.711824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.406 [2024-12-09 05:25:20.711842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.406 qpair failed and we were unable to recover it. 
00:30:38.406 [2024-12-09 05:25:20.721720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.406 [2024-12-09 05:25:20.721776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.406 [2024-12-09 05:25:20.721792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.406 [2024-12-09 05:25:20.721802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.406 [2024-12-09 05:25:20.721810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.406 [2024-12-09 05:25:20.721828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.406 qpair failed and we were unable to recover it. 
00:30:38.406 [2024-12-09 05:25:20.731729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.406 [2024-12-09 05:25:20.731835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.406 [2024-12-09 05:25:20.731851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.406 [2024-12-09 05:25:20.731861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.406 [2024-12-09 05:25:20.731872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.407 [2024-12-09 05:25:20.731890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.407 qpair failed and we were unable to recover it. 
00:30:38.407 [2024-12-09 05:25:20.741836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.407 [2024-12-09 05:25:20.741908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.407 [2024-12-09 05:25:20.741924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.407 [2024-12-09 05:25:20.741933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.407 [2024-12-09 05:25:20.741942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.407 [2024-12-09 05:25:20.741959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.407 qpair failed and we were unable to recover it. 
00:30:38.407 [2024-12-09 05:25:20.751853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.407 [2024-12-09 05:25:20.751904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.407 [2024-12-09 05:25:20.751920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.407 [2024-12-09 05:25:20.751929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.407 [2024-12-09 05:25:20.751937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.407 [2024-12-09 05:25:20.751954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.407 qpair failed and we were unable to recover it. 
00:30:38.407 [2024-12-09 05:25:20.761879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.407 [2024-12-09 05:25:20.761938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.407 [2024-12-09 05:25:20.761954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.407 [2024-12-09 05:25:20.761964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.407 [2024-12-09 05:25:20.761972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.407 [2024-12-09 05:25:20.761989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.407 qpair failed and we were unable to recover it. 
00:30:38.407 [2024-12-09 05:25:20.771942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.407 [2024-12-09 05:25:20.771998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.407 [2024-12-09 05:25:20.772014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.407 [2024-12-09 05:25:20.772023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.407 [2024-12-09 05:25:20.772031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.407 [2024-12-09 05:25:20.772048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.407 qpair failed and we were unable to recover it. 
00:30:38.407 [2024-12-09 05:25:20.781878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.407 [2024-12-09 05:25:20.781929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.407 [2024-12-09 05:25:20.781945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.407 [2024-12-09 05:25:20.781954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.407 [2024-12-09 05:25:20.781962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.407 [2024-12-09 05:25:20.781980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.407 qpair failed and we were unable to recover it. 
00:30:38.407 [2024-12-09 05:25:20.791974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.407 [2024-12-09 05:25:20.792048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.407 [2024-12-09 05:25:20.792065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.407 [2024-12-09 05:25:20.792077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.407 [2024-12-09 05:25:20.792087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.407 [2024-12-09 05:25:20.792108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.407 qpair failed and we were unable to recover it. 
00:30:38.407 [2024-12-09 05:25:20.801997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.407 [2024-12-09 05:25:20.802057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.407 [2024-12-09 05:25:20.802073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.407 [2024-12-09 05:25:20.802082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.407 [2024-12-09 05:25:20.802091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.407 [2024-12-09 05:25:20.802109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.407 qpair failed and we were unable to recover it. 
00:30:38.407 [2024-12-09 05:25:20.812052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.407 [2024-12-09 05:25:20.812126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.407 [2024-12-09 05:25:20.812142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.407 [2024-12-09 05:25:20.812151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.407 [2024-12-09 05:25:20.812160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.407 [2024-12-09 05:25:20.812177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.407 qpair failed and we were unable to recover it. 
00:30:38.407 [2024-12-09 05:25:20.822081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.407 [2024-12-09 05:25:20.822138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.407 [2024-12-09 05:25:20.822156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.407 [2024-12-09 05:25:20.822165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.407 [2024-12-09 05:25:20.822173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.407 [2024-12-09 05:25:20.822191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.407 qpair failed and we were unable to recover it. 
00:30:38.407 [2024-12-09 05:25:20.832020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.407 [2024-12-09 05:25:20.832076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.407 [2024-12-09 05:25:20.832091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.407 [2024-12-09 05:25:20.832100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.407 [2024-12-09 05:25:20.832108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.407 [2024-12-09 05:25:20.832125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.407 qpair failed and we were unable to recover it. 
00:30:38.407 [2024-12-09 05:25:20.842066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.407 [2024-12-09 05:25:20.842143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.407 [2024-12-09 05:25:20.842158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.407 [2024-12-09 05:25:20.842167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.407 [2024-12-09 05:25:20.842175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.407 [2024-12-09 05:25:20.842192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.407 qpair failed and we were unable to recover it. 
00:30:38.407 [2024-12-09 05:25:20.852080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.407 [2024-12-09 05:25:20.852140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.407 [2024-12-09 05:25:20.852155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.407 [2024-12-09 05:25:20.852164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.407 [2024-12-09 05:25:20.852173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.407 [2024-12-09 05:25:20.852190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.407 qpair failed and we were unable to recover it. 
00:30:38.407 [2024-12-09 05:25:20.862198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.407 [2024-12-09 05:25:20.862283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.407 [2024-12-09 05:25:20.862298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.407 [2024-12-09 05:25:20.862307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.407 [2024-12-09 05:25:20.862321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.407 [2024-12-09 05:25:20.862340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.407 qpair failed and we were unable to recover it. 
00:30:38.407 [2024-12-09 05:25:20.872128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.407 [2024-12-09 05:25:20.872177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.407 [2024-12-09 05:25:20.872192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.407 [2024-12-09 05:25:20.872201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.407 [2024-12-09 05:25:20.872214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.407 [2024-12-09 05:25:20.872232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.407 qpair failed and we were unable to recover it. 
00:30:38.667 [2024-12-09 05:25:20.882167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.667 [2024-12-09 05:25:20.882230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.667 [2024-12-09 05:25:20.882246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.667 [2024-12-09 05:25:20.882256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.667 [2024-12-09 05:25:20.882264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.667 [2024-12-09 05:25:20.882282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.667 qpair failed and we were unable to recover it. 
00:30:38.667 [2024-12-09 05:25:20.892222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.667 [2024-12-09 05:25:20.892316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.667 [2024-12-09 05:25:20.892332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.667 [2024-12-09 05:25:20.892341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.667 [2024-12-09 05:25:20.892349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.667 [2024-12-09 05:25:20.892367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.667 qpair failed and we were unable to recover it. 
00:30:38.667 [2024-12-09 05:25:20.902221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.667 [2024-12-09 05:25:20.902277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.667 [2024-12-09 05:25:20.902293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.667 [2024-12-09 05:25:20.902301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.667 [2024-12-09 05:25:20.902310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.667 [2024-12-09 05:25:20.902327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.667 qpair failed and we were unable to recover it. 
00:30:38.667 [2024-12-09 05:25:20.912269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.667 [2024-12-09 05:25:20.912361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.667 [2024-12-09 05:25:20.912377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.667 [2024-12-09 05:25:20.912386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.667 [2024-12-09 05:25:20.912394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.667 [2024-12-09 05:25:20.912411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.667 qpair failed and we were unable to recover it. 
00:30:38.667 [2024-12-09 05:25:20.922435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.667 [2024-12-09 05:25:20.922512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.667 [2024-12-09 05:25:20.922528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.667 [2024-12-09 05:25:20.922537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.667 [2024-12-09 05:25:20.922545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.667 [2024-12-09 05:25:20.922563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.667 qpair failed and we were unable to recover it. 
00:30:38.667 [2024-12-09 05:25:20.932366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.667 [2024-12-09 05:25:20.932424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.667 [2024-12-09 05:25:20.932440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.667 [2024-12-09 05:25:20.932448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.667 [2024-12-09 05:25:20.932457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.667 [2024-12-09 05:25:20.932473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.667 qpair failed and we were unable to recover it. 
00:30:38.667 [2024-12-09 05:25:20.942341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.667 [2024-12-09 05:25:20.942443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.667 [2024-12-09 05:25:20.942458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.667 [2024-12-09 05:25:20.942467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.667 [2024-12-09 05:25:20.942475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.667 [2024-12-09 05:25:20.942492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.667 qpair failed and we were unable to recover it. 
00:30:38.667 [2024-12-09 05:25:20.952475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.667 [2024-12-09 05:25:20.952582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.667 [2024-12-09 05:25:20.952601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.667 [2024-12-09 05:25:20.952610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.667 [2024-12-09 05:25:20.952618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.667 [2024-12-09 05:25:20.952635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.667 qpair failed and we were unable to recover it. 
00:30:38.667 [2024-12-09 05:25:20.962476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.667 [2024-12-09 05:25:20.962532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.667 [2024-12-09 05:25:20.962548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.667 [2024-12-09 05:25:20.962557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.667 [2024-12-09 05:25:20.962566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.667 [2024-12-09 05:25:20.962583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.667 qpair failed and we were unable to recover it. 
00:30:38.667 [2024-12-09 05:25:20.972498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.667 [2024-12-09 05:25:20.972553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.667 [2024-12-09 05:25:20.972567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.667 [2024-12-09 05:25:20.972576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.667 [2024-12-09 05:25:20.972585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.667 [2024-12-09 05:25:20.972602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.667 qpair failed and we were unable to recover it. 
00:30:38.668 [2024-12-09 05:25:20.982540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.668 [2024-12-09 05:25:20.982592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.668 [2024-12-09 05:25:20.982608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.668 [2024-12-09 05:25:20.982617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.668 [2024-12-09 05:25:20.982626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.668 [2024-12-09 05:25:20.982643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.668 qpair failed and we were unable to recover it. 
00:30:38.668 [2024-12-09 05:25:20.992502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.668 [2024-12-09 05:25:20.992591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.668 [2024-12-09 05:25:20.992606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.668 [2024-12-09 05:25:20.992618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.668 [2024-12-09 05:25:20.992626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.668 [2024-12-09 05:25:20.992643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.668 qpair failed and we were unable to recover it. 
00:30:38.668 [2024-12-09 05:25:21.002583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.668 [2024-12-09 05:25:21.002639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.668 [2024-12-09 05:25:21.002654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.668 [2024-12-09 05:25:21.002663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.668 [2024-12-09 05:25:21.002672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.668 [2024-12-09 05:25:21.002689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.668 qpair failed and we were unable to recover it. 
00:30:38.668 [2024-12-09 05:25:21.012600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.668 [2024-12-09 05:25:21.012661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.668 [2024-12-09 05:25:21.012676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.668 [2024-12-09 05:25:21.012685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.668 [2024-12-09 05:25:21.012693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.668 [2024-12-09 05:25:21.012711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.668 qpair failed and we were unable to recover it. 
[The CONNECT failure sequence above repeats 34 more times at ~10 ms intervals, from 2024-12-09 05:25:21.022562 through 05:25:21.353681. Every attempt fails identically: "Unknown controller ID 0x1" on the target, Connect command error sct 1, sc 130 on the initiator, CQ transport error -6 (No such device or address) on qpair id 4, and "qpair failed and we were unable to recover it."]
00:30:38.929 [2024-12-09 05:25:21.363610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.929 [2024-12-09 05:25:21.363669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.929 [2024-12-09 05:25:21.363684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.929 [2024-12-09 05:25:21.363693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.929 [2024-12-09 05:25:21.363701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.929 [2024-12-09 05:25:21.363718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.929 qpair failed and we were unable to recover it. 
00:30:38.929 [2024-12-09 05:25:21.373624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.929 [2024-12-09 05:25:21.373686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.929 [2024-12-09 05:25:21.373701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.929 [2024-12-09 05:25:21.373710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.929 [2024-12-09 05:25:21.373719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.929 [2024-12-09 05:25:21.373736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.929 qpair failed and we were unable to recover it. 
00:30:38.929 [2024-12-09 05:25:21.383663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.929 [2024-12-09 05:25:21.383714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.929 [2024-12-09 05:25:21.383732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.929 [2024-12-09 05:25:21.383740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.945 [2024-12-09 05:25:21.383749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.945 [2024-12-09 05:25:21.383766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.945 qpair failed and we were unable to recover it. 
00:30:38.945 [2024-12-09 05:25:21.393661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.945 [2024-12-09 05:25:21.393717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.945 [2024-12-09 05:25:21.393732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.945 [2024-12-09 05:25:21.393741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.945 [2024-12-09 05:25:21.393749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:38.945 [2024-12-09 05:25:21.393766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:38.945 qpair failed and we were unable to recover it. 
00:30:39.206 [2024-12-09 05:25:21.403725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.206 [2024-12-09 05:25:21.403785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.206 [2024-12-09 05:25:21.403800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.206 [2024-12-09 05:25:21.403810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.206 [2024-12-09 05:25:21.403818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.206 [2024-12-09 05:25:21.403835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.206 qpair failed and we were unable to recover it. 
00:30:39.206 [2024-12-09 05:25:21.413739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.206 [2024-12-09 05:25:21.413795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.206 [2024-12-09 05:25:21.413810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.206 [2024-12-09 05:25:21.413819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.206 [2024-12-09 05:25:21.413828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.206 [2024-12-09 05:25:21.413845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.206 qpair failed and we were unable to recover it. 
00:30:39.206 [2024-12-09 05:25:21.423762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.206 [2024-12-09 05:25:21.423822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.206 [2024-12-09 05:25:21.423837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.206 [2024-12-09 05:25:21.423847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.206 [2024-12-09 05:25:21.423858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.206 [2024-12-09 05:25:21.423876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.206 qpair failed and we were unable to recover it. 
00:30:39.206 [2024-12-09 05:25:21.433791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.206 [2024-12-09 05:25:21.433846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.206 [2024-12-09 05:25:21.433861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.206 [2024-12-09 05:25:21.433870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.206 [2024-12-09 05:25:21.433878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.206 [2024-12-09 05:25:21.433895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.206 qpair failed and we were unable to recover it. 
00:30:39.206 [2024-12-09 05:25:21.443838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.206 [2024-12-09 05:25:21.443901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.206 [2024-12-09 05:25:21.443916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.206 [2024-12-09 05:25:21.443926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.206 [2024-12-09 05:25:21.443934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.206 [2024-12-09 05:25:21.443952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.206 qpair failed and we were unable to recover it. 
00:30:39.206 [2024-12-09 05:25:21.453859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.206 [2024-12-09 05:25:21.453915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.206 [2024-12-09 05:25:21.453930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.206 [2024-12-09 05:25:21.453939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.206 [2024-12-09 05:25:21.453947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.206 [2024-12-09 05:25:21.453964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.206 qpair failed and we were unable to recover it. 
00:30:39.206 [2024-12-09 05:25:21.463905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.206 [2024-12-09 05:25:21.463961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.206 [2024-12-09 05:25:21.463976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.206 [2024-12-09 05:25:21.463985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.206 [2024-12-09 05:25:21.463993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.206 [2024-12-09 05:25:21.464010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.206 qpair failed and we were unable to recover it. 
00:30:39.206 [2024-12-09 05:25:21.473916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.206 [2024-12-09 05:25:21.473972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.206 [2024-12-09 05:25:21.473988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.206 [2024-12-09 05:25:21.473997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.206 [2024-12-09 05:25:21.474005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.206 [2024-12-09 05:25:21.474022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.206 qpair failed and we were unable to recover it. 
00:30:39.206 [2024-12-09 05:25:21.483939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.206 [2024-12-09 05:25:21.483994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.206 [2024-12-09 05:25:21.484009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.206 [2024-12-09 05:25:21.484018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.206 [2024-12-09 05:25:21.484026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.206 [2024-12-09 05:25:21.484043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.206 qpair failed and we were unable to recover it. 
00:30:39.206 [2024-12-09 05:25:21.493969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.206 [2024-12-09 05:25:21.494022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.206 [2024-12-09 05:25:21.494037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.206 [2024-12-09 05:25:21.494046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.206 [2024-12-09 05:25:21.494054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.206 [2024-12-09 05:25:21.494071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.206 qpair failed and we were unable to recover it. 
00:30:39.206 [2024-12-09 05:25:21.503996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.206 [2024-12-09 05:25:21.504054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.206 [2024-12-09 05:25:21.504069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.206 [2024-12-09 05:25:21.504078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.206 [2024-12-09 05:25:21.504087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.206 [2024-12-09 05:25:21.504104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.206 qpair failed and we were unable to recover it. 
00:30:39.206 [2024-12-09 05:25:21.514047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.207 [2024-12-09 05:25:21.514098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.207 [2024-12-09 05:25:21.514116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.207 [2024-12-09 05:25:21.514125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.207 [2024-12-09 05:25:21.514133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.207 [2024-12-09 05:25:21.514150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.207 qpair failed and we were unable to recover it. 
00:30:39.207 [2024-12-09 05:25:21.524066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.207 [2024-12-09 05:25:21.524123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.207 [2024-12-09 05:25:21.524138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.207 [2024-12-09 05:25:21.524147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.207 [2024-12-09 05:25:21.524155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.207 [2024-12-09 05:25:21.524172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.207 qpair failed and we were unable to recover it. 
00:30:39.207 [2024-12-09 05:25:21.534084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.207 [2024-12-09 05:25:21.534138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.207 [2024-12-09 05:25:21.534154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.207 [2024-12-09 05:25:21.534163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.207 [2024-12-09 05:25:21.534171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.207 [2024-12-09 05:25:21.534188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.207 qpair failed and we were unable to recover it. 
00:30:39.207 [2024-12-09 05:25:21.544132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.207 [2024-12-09 05:25:21.544238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.207 [2024-12-09 05:25:21.544253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.207 [2024-12-09 05:25:21.544262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.207 [2024-12-09 05:25:21.544270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.207 [2024-12-09 05:25:21.544288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.207 qpair failed and we were unable to recover it. 
00:30:39.207 [2024-12-09 05:25:21.554134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.207 [2024-12-09 05:25:21.554189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.207 [2024-12-09 05:25:21.554204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.207 [2024-12-09 05:25:21.554223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.207 [2024-12-09 05:25:21.554232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.207 [2024-12-09 05:25:21.554249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.207 qpair failed and we were unable to recover it. 
00:30:39.207 [2024-12-09 05:25:21.564184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.207 [2024-12-09 05:25:21.564248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.207 [2024-12-09 05:25:21.564263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.207 [2024-12-09 05:25:21.564272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.207 [2024-12-09 05:25:21.564280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.207 [2024-12-09 05:25:21.564298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.207 qpair failed and we were unable to recover it. 
00:30:39.207 [2024-12-09 05:25:21.574203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.207 [2024-12-09 05:25:21.574294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.207 [2024-12-09 05:25:21.574310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.207 [2024-12-09 05:25:21.574319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.207 [2024-12-09 05:25:21.574327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.207 [2024-12-09 05:25:21.574345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.207 qpair failed and we were unable to recover it. 
00:30:39.207 [2024-12-09 05:25:21.584223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.207 [2024-12-09 05:25:21.584278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.207 [2024-12-09 05:25:21.584294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.207 [2024-12-09 05:25:21.584303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.207 [2024-12-09 05:25:21.584312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.207 [2024-12-09 05:25:21.584329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.207 qpair failed and we were unable to recover it. 
00:30:39.207 [2024-12-09 05:25:21.594246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.207 [2024-12-09 05:25:21.594297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.207 [2024-12-09 05:25:21.594312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.207 [2024-12-09 05:25:21.594321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.207 [2024-12-09 05:25:21.594330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.207 [2024-12-09 05:25:21.594350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.207 qpair failed and we were unable to recover it. 
00:30:39.207 [2024-12-09 05:25:21.604279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.207 [2024-12-09 05:25:21.604339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.207 [2024-12-09 05:25:21.604354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.207 [2024-12-09 05:25:21.604363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.207 [2024-12-09 05:25:21.604371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.207 [2024-12-09 05:25:21.604388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.207 qpair failed and we were unable to recover it. 
00:30:39.207 [2024-12-09 05:25:21.614308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.207 [2024-12-09 05:25:21.614360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.207 [2024-12-09 05:25:21.614375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.207 [2024-12-09 05:25:21.614384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.207 [2024-12-09 05:25:21.614393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.207 [2024-12-09 05:25:21.614410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.207 qpair failed and we were unable to recover it. 
00:30:39.207 [2024-12-09 05:25:21.624342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.207 [2024-12-09 05:25:21.624403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.207 [2024-12-09 05:25:21.624417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.207 [2024-12-09 05:25:21.624426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.207 [2024-12-09 05:25:21.624435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.207 [2024-12-09 05:25:21.624452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.207 qpair failed and we were unable to recover it. 
00:30:39.207 [2024-12-09 05:25:21.634370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.207 [2024-12-09 05:25:21.634421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.207 [2024-12-09 05:25:21.634435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.207 [2024-12-09 05:25:21.634444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.207 [2024-12-09 05:25:21.634453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.207 [2024-12-09 05:25:21.634469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.207 qpair failed and we were unable to recover it. 
00:30:39.207 [2024-12-09 05:25:21.644464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.207 [2024-12-09 05:25:21.644522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.207 [2024-12-09 05:25:21.644537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.207 [2024-12-09 05:25:21.644545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.207 [2024-12-09 05:25:21.644554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.207 [2024-12-09 05:25:21.644571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.207 qpair failed and we were unable to recover it. 
00:30:39.207 [2024-12-09 05:25:21.654435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.207 [2024-12-09 05:25:21.654495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.207 [2024-12-09 05:25:21.654510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.207 [2024-12-09 05:25:21.654519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.207 [2024-12-09 05:25:21.654528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.207 [2024-12-09 05:25:21.654545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.207 qpair failed and we were unable to recover it. 
00:30:39.207 [2024-12-09 05:25:21.664461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.207 [2024-12-09 05:25:21.664512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.207 [2024-12-09 05:25:21.664527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.207 [2024-12-09 05:25:21.664536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.207 [2024-12-09 05:25:21.664544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.207 [2024-12-09 05:25:21.664561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.207 qpair failed and we were unable to recover it. 
00:30:39.468 [2024-12-09 05:25:21.674495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.468 [2024-12-09 05:25:21.674545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.468 [2024-12-09 05:25:21.674559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.468 [2024-12-09 05:25:21.674568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.468 [2024-12-09 05:25:21.674577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.468 [2024-12-09 05:25:21.674593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.468 qpair failed and we were unable to recover it. 
00:30:39.468 [2024-12-09 05:25:21.684524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.469 [2024-12-09 05:25:21.684584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.469 [2024-12-09 05:25:21.684599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.469 [2024-12-09 05:25:21.684611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.469 [2024-12-09 05:25:21.684619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.469 [2024-12-09 05:25:21.684636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.469 qpair failed and we were unable to recover it. 
00:30:39.469 [2024-12-09 05:25:21.694546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.469 [2024-12-09 05:25:21.694605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.469 [2024-12-09 05:25:21.694619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.469 [2024-12-09 05:25:21.694628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.469 [2024-12-09 05:25:21.694637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.469 [2024-12-09 05:25:21.694654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.469 qpair failed and we were unable to recover it. 
00:30:39.469 [2024-12-09 05:25:21.704581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.469 [2024-12-09 05:25:21.704665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.469 [2024-12-09 05:25:21.704680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.469 [2024-12-09 05:25:21.704689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.469 [2024-12-09 05:25:21.704697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.469 [2024-12-09 05:25:21.704714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.469 qpair failed and we were unable to recover it. 
00:30:39.469 [2024-12-09 05:25:21.714594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.469 [2024-12-09 05:25:21.714646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.469 [2024-12-09 05:25:21.714661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.469 [2024-12-09 05:25:21.714670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.469 [2024-12-09 05:25:21.714678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.469 [2024-12-09 05:25:21.714695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.469 qpair failed and we were unable to recover it. 
00:30:39.469 [2024-12-09 05:25:21.724630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.469 [2024-12-09 05:25:21.724718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.469 [2024-12-09 05:25:21.724733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.469 [2024-12-09 05:25:21.724742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.469 [2024-12-09 05:25:21.724750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.469 [2024-12-09 05:25:21.724770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.469 qpair failed and we were unable to recover it. 
00:30:39.469 [2024-12-09 05:25:21.734646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.469 [2024-12-09 05:25:21.734703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.469 [2024-12-09 05:25:21.734717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.469 [2024-12-09 05:25:21.734726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.469 [2024-12-09 05:25:21.734734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.469 [2024-12-09 05:25:21.734751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.469 qpair failed and we were unable to recover it. 
00:30:39.469 [2024-12-09 05:25:21.744681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.469 [2024-12-09 05:25:21.744737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.469 [2024-12-09 05:25:21.744752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.469 [2024-12-09 05:25:21.744761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.469 [2024-12-09 05:25:21.744769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.469 [2024-12-09 05:25:21.744786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.469 qpair failed and we were unable to recover it. 
00:30:39.469 [2024-12-09 05:25:21.754700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.469 [2024-12-09 05:25:21.754754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.469 [2024-12-09 05:25:21.754769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.469 [2024-12-09 05:25:21.754778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.469 [2024-12-09 05:25:21.754787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.469 [2024-12-09 05:25:21.754804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.469 qpair failed and we were unable to recover it. 
00:30:39.469 [2024-12-09 05:25:21.764732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.469 [2024-12-09 05:25:21.764787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.469 [2024-12-09 05:25:21.764802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.469 [2024-12-09 05:25:21.764810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.469 [2024-12-09 05:25:21.764819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.469 [2024-12-09 05:25:21.764836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.469 qpair failed and we were unable to recover it. 
00:30:39.469 [2024-12-09 05:25:21.774763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.469 [2024-12-09 05:25:21.774847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.469 [2024-12-09 05:25:21.774862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.469 [2024-12-09 05:25:21.774871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.469 [2024-12-09 05:25:21.774879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.469 [2024-12-09 05:25:21.774897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.469 qpair failed and we were unable to recover it. 
00:30:39.469 [2024-12-09 05:25:21.784788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.469 [2024-12-09 05:25:21.784843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.469 [2024-12-09 05:25:21.784858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.469 [2024-12-09 05:25:21.784867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.469 [2024-12-09 05:25:21.784876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.469 [2024-12-09 05:25:21.784893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.469 qpair failed and we were unable to recover it. 
00:30:39.469 [2024-12-09 05:25:21.794814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.469 [2024-12-09 05:25:21.794870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.469 [2024-12-09 05:25:21.794884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.469 [2024-12-09 05:25:21.794894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.469 [2024-12-09 05:25:21.794902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.469 [2024-12-09 05:25:21.794919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.469 qpair failed and we were unable to recover it. 
00:30:39.469 [2024-12-09 05:25:21.804876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.469 [2024-12-09 05:25:21.804984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.469 [2024-12-09 05:25:21.805000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.469 [2024-12-09 05:25:21.805009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.469 [2024-12-09 05:25:21.805018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.469 [2024-12-09 05:25:21.805035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.469 qpair failed and we were unable to recover it. 
00:30:39.469 [2024-12-09 05:25:21.814874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.469 [2024-12-09 05:25:21.814933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.469 [2024-12-09 05:25:21.814952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.469 [2024-12-09 05:25:21.814961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.469 [2024-12-09 05:25:21.814970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.469 [2024-12-09 05:25:21.814987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.469 qpair failed and we were unable to recover it. 
00:30:39.469 [2024-12-09 05:25:21.824921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.469 [2024-12-09 05:25:21.824974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.469 [2024-12-09 05:25:21.824990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.469 [2024-12-09 05:25:21.824999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.469 [2024-12-09 05:25:21.825007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.469 [2024-12-09 05:25:21.825024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.469 qpair failed and we were unable to recover it. 
00:30:39.469 [2024-12-09 05:25:21.834919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.469 [2024-12-09 05:25:21.834977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.469 [2024-12-09 05:25:21.834993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.469 [2024-12-09 05:25:21.835001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.469 [2024-12-09 05:25:21.835010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.469 [2024-12-09 05:25:21.835027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.469 qpair failed and we were unable to recover it. 
00:30:39.469 [2024-12-09 05:25:21.844948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.469 [2024-12-09 05:25:21.845003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.469 [2024-12-09 05:25:21.845017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.469 [2024-12-09 05:25:21.845026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.469 [2024-12-09 05:25:21.845034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.469 [2024-12-09 05:25:21.845052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.469 qpair failed and we were unable to recover it. 
00:30:39.469 [2024-12-09 05:25:21.854964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.469 [2024-12-09 05:25:21.855019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.469 [2024-12-09 05:25:21.855034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.469 [2024-12-09 05:25:21.855043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.469 [2024-12-09 05:25:21.855054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.469 [2024-12-09 05:25:21.855071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.469 qpair failed and we were unable to recover it. 
00:30:39.469 [2024-12-09 05:25:21.864995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.469 [2024-12-09 05:25:21.865050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.469 [2024-12-09 05:25:21.865066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.469 [2024-12-09 05:25:21.865075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.469 [2024-12-09 05:25:21.865083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.469 [2024-12-09 05:25:21.865100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.469 qpair failed and we were unable to recover it. 
00:30:39.469 [2024-12-09 05:25:21.875051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.469 [2024-12-09 05:25:21.875115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.469 [2024-12-09 05:25:21.875131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.469 [2024-12-09 05:25:21.875140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.469 [2024-12-09 05:25:21.875148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.469 [2024-12-09 05:25:21.875165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.469 qpair failed and we were unable to recover it. 
00:30:39.469 [2024-12-09 05:25:21.885090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.469 [2024-12-09 05:25:21.885148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.469 [2024-12-09 05:25:21.885164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.469 [2024-12-09 05:25:21.885173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.469 [2024-12-09 05:25:21.885182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.469 [2024-12-09 05:25:21.885199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.469 qpair failed and we were unable to recover it. 
00:30:39.470 [2024-12-09 05:25:21.895085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.470 [2024-12-09 05:25:21.895166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.470 [2024-12-09 05:25:21.895182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.470 [2024-12-09 05:25:21.895192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.470 [2024-12-09 05:25:21.895200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.470 [2024-12-09 05:25:21.895221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.470 qpair failed and we were unable to recover it. 
00:30:39.470 [2024-12-09 05:25:21.905113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.470 [2024-12-09 05:25:21.905173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.470 [2024-12-09 05:25:21.905188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.470 [2024-12-09 05:25:21.905197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.470 [2024-12-09 05:25:21.905205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.470 [2024-12-09 05:25:21.905226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.470 qpair failed and we were unable to recover it. 
00:30:39.470 [2024-12-09 05:25:21.915138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.470 [2024-12-09 05:25:21.915189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.470 [2024-12-09 05:25:21.915204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.470 [2024-12-09 05:25:21.915217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.470 [2024-12-09 05:25:21.915225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.470 [2024-12-09 05:25:21.915242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.470 qpair failed and we were unable to recover it. 
00:30:39.470 [2024-12-09 05:25:21.925222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.470 [2024-12-09 05:25:21.925316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.470 [2024-12-09 05:25:21.925330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.470 [2024-12-09 05:25:21.925339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.470 [2024-12-09 05:25:21.925347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.470 [2024-12-09 05:25:21.925364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.470 qpair failed and we were unable to recover it. 
00:30:39.470 [2024-12-09 05:25:21.935198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.470 [2024-12-09 05:25:21.935277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.470 [2024-12-09 05:25:21.935292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.470 [2024-12-09 05:25:21.935301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.470 [2024-12-09 05:25:21.935309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.470 [2024-12-09 05:25:21.935327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.470 qpair failed and we were unable to recover it. 
00:30:39.730 [2024-12-09 05:25:21.945234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.730 [2024-12-09 05:25:21.945291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.730 [2024-12-09 05:25:21.945309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.730 [2024-12-09 05:25:21.945319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.730 [2024-12-09 05:25:21.945327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.730 [2024-12-09 05:25:21.945344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.730 qpair failed and we were unable to recover it.
00:30:39.730 [2024-12-09 05:25:21.955258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.730 [2024-12-09 05:25:21.955316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.730 [2024-12-09 05:25:21.955332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.730 [2024-12-09 05:25:21.955341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.730 [2024-12-09 05:25:21.955349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.730 [2024-12-09 05:25:21.955367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.730 qpair failed and we were unable to recover it.
00:30:39.730 [2024-12-09 05:25:21.965300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.730 [2024-12-09 05:25:21.965356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.730 [2024-12-09 05:25:21.965371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.730 [2024-12-09 05:25:21.965380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.730 [2024-12-09 05:25:21.965388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.730 [2024-12-09 05:25:21.965405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.730 qpair failed and we were unable to recover it.
00:30:39.730 [2024-12-09 05:25:21.975337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.730 [2024-12-09 05:25:21.975415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.730 [2024-12-09 05:25:21.975430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.730 [2024-12-09 05:25:21.975439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.730 [2024-12-09 05:25:21.975447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.730 [2024-12-09 05:25:21.975464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.730 qpair failed and we were unable to recover it.
00:30:39.730 [2024-12-09 05:25:21.985395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.730 [2024-12-09 05:25:21.985460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.730 [2024-12-09 05:25:21.985475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.730 [2024-12-09 05:25:21.985483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.730 [2024-12-09 05:25:21.985495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.730 [2024-12-09 05:25:21.985512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.730 qpair failed and we were unable to recover it.
00:30:39.730 [2024-12-09 05:25:21.995367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.730 [2024-12-09 05:25:21.995430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.730 [2024-12-09 05:25:21.995445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.730 [2024-12-09 05:25:21.995454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.730 [2024-12-09 05:25:21.995462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.730 [2024-12-09 05:25:21.995480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.730 qpair failed and we were unable to recover it.
00:30:39.730 [2024-12-09 05:25:22.005428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.730 [2024-12-09 05:25:22.005489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.730 [2024-12-09 05:25:22.005504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.730 [2024-12-09 05:25:22.005513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.730 [2024-12-09 05:25:22.005521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.730 [2024-12-09 05:25:22.005538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.730 qpair failed and we were unable to recover it.
00:30:39.730 [2024-12-09 05:25:22.015433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.730 [2024-12-09 05:25:22.015522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.730 [2024-12-09 05:25:22.015537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.730 [2024-12-09 05:25:22.015545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.730 [2024-12-09 05:25:22.015554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.730 [2024-12-09 05:25:22.015571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.730 qpair failed and we were unable to recover it.
00:30:39.730 [2024-12-09 05:25:22.025462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.730 [2024-12-09 05:25:22.025553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.730 [2024-12-09 05:25:22.025568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.730 [2024-12-09 05:25:22.025577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.730 [2024-12-09 05:25:22.025585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.730 [2024-12-09 05:25:22.025602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.730 qpair failed and we were unable to recover it.
00:30:39.730 [2024-12-09 05:25:22.035514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.730 [2024-12-09 05:25:22.035568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.730 [2024-12-09 05:25:22.035583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.730 [2024-12-09 05:25:22.035592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.730 [2024-12-09 05:25:22.035600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.731 [2024-12-09 05:25:22.035617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.731 qpair failed and we were unable to recover it.
00:30:39.731 [2024-12-09 05:25:22.045527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.731 [2024-12-09 05:25:22.045610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.731 [2024-12-09 05:25:22.045626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.731 [2024-12-09 05:25:22.045635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.731 [2024-12-09 05:25:22.045643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.731 [2024-12-09 05:25:22.045659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.731 qpair failed and we were unable to recover it.
00:30:39.731 [2024-12-09 05:25:22.055565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.731 [2024-12-09 05:25:22.055648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.731 [2024-12-09 05:25:22.055663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.731 [2024-12-09 05:25:22.055672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.731 [2024-12-09 05:25:22.055680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.731 [2024-12-09 05:25:22.055697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.731 qpair failed and we were unable to recover it.
00:30:39.731 [2024-12-09 05:25:22.065624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.731 [2024-12-09 05:25:22.065687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.731 [2024-12-09 05:25:22.065709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.731 [2024-12-09 05:25:22.065719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.731 [2024-12-09 05:25:22.065727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.731 [2024-12-09 05:25:22.065750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.731 qpair failed and we were unable to recover it.
00:30:39.731 [2024-12-09 05:25:22.075594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.731 [2024-12-09 05:25:22.075650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.731 [2024-12-09 05:25:22.075669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.731 [2024-12-09 05:25:22.075678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.731 [2024-12-09 05:25:22.075686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.731 [2024-12-09 05:25:22.075704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.731 qpair failed and we were unable to recover it.
00:30:39.731 [2024-12-09 05:25:22.085628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.731 [2024-12-09 05:25:22.085686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.731 [2024-12-09 05:25:22.085702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.731 [2024-12-09 05:25:22.085711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.731 [2024-12-09 05:25:22.085719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.731 [2024-12-09 05:25:22.085737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.731 qpair failed and we were unable to recover it.
00:30:39.731 [2024-12-09 05:25:22.095646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.731 [2024-12-09 05:25:22.095701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.731 [2024-12-09 05:25:22.095716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.731 [2024-12-09 05:25:22.095725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.731 [2024-12-09 05:25:22.095734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.731 [2024-12-09 05:25:22.095751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.731 qpair failed and we were unable to recover it.
00:30:39.731 [2024-12-09 05:25:22.105683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.731 [2024-12-09 05:25:22.105772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.731 [2024-12-09 05:25:22.105787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.731 [2024-12-09 05:25:22.105796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.731 [2024-12-09 05:25:22.105805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.731 [2024-12-09 05:25:22.105822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.731 qpair failed and we were unable to recover it.
00:30:39.731 [2024-12-09 05:25:22.115699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.731 [2024-12-09 05:25:22.115752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.731 [2024-12-09 05:25:22.115767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.731 [2024-12-09 05:25:22.115779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.731 [2024-12-09 05:25:22.115788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.731 [2024-12-09 05:25:22.115805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.731 qpair failed and we were unable to recover it.
00:30:39.731 [2024-12-09 05:25:22.125742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.731 [2024-12-09 05:25:22.125802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.731 [2024-12-09 05:25:22.125817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.731 [2024-12-09 05:25:22.125826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.731 [2024-12-09 05:25:22.125834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.731 [2024-12-09 05:25:22.125851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.731 qpair failed and we were unable to recover it.
00:30:39.731 [2024-12-09 05:25:22.135752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.731 [2024-12-09 05:25:22.135808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.731 [2024-12-09 05:25:22.135824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.731 [2024-12-09 05:25:22.135833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.731 [2024-12-09 05:25:22.135841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.731 [2024-12-09 05:25:22.135859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.731 qpair failed and we were unable to recover it.
00:30:39.731 [2024-12-09 05:25:22.145799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.731 [2024-12-09 05:25:22.145872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.731 [2024-12-09 05:25:22.145887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.731 [2024-12-09 05:25:22.145896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.731 [2024-12-09 05:25:22.145904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.731 [2024-12-09 05:25:22.145922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.731 qpair failed and we were unable to recover it.
00:30:39.731 [2024-12-09 05:25:22.155899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.731 [2024-12-09 05:25:22.155989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.731 [2024-12-09 05:25:22.156004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.731 [2024-12-09 05:25:22.156013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.731 [2024-12-09 05:25:22.156021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.731 [2024-12-09 05:25:22.156041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.731 qpair failed and we were unable to recover it.
00:30:39.731 [2024-12-09 05:25:22.165858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.731 [2024-12-09 05:25:22.165918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.731 [2024-12-09 05:25:22.165934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.731 [2024-12-09 05:25:22.165943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.731 [2024-12-09 05:25:22.165951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.731 [2024-12-09 05:25:22.165968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.731 qpair failed and we were unable to recover it.
00:30:39.731 [2024-12-09 05:25:22.175885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.731 [2024-12-09 05:25:22.175940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.731 [2024-12-09 05:25:22.175955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.731 [2024-12-09 05:25:22.175964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.731 [2024-12-09 05:25:22.175972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.731 [2024-12-09 05:25:22.175989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.731 qpair failed and we were unable to recover it.
00:30:39.731 [2024-12-09 05:25:22.185927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.731 [2024-12-09 05:25:22.185978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.731 [2024-12-09 05:25:22.185993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.731 [2024-12-09 05:25:22.186002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.731 [2024-12-09 05:25:22.186010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.731 [2024-12-09 05:25:22.186027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.731 qpair failed and we were unable to recover it.
00:30:39.731 [2024-12-09 05:25:22.195911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.731 [2024-12-09 05:25:22.195968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.731 [2024-12-09 05:25:22.195984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.731 [2024-12-09 05:25:22.195993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.731 [2024-12-09 05:25:22.196001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.731 [2024-12-09 05:25:22.196019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.731 qpair failed and we were unable to recover it.
00:30:39.992 [2024-12-09 05:25:22.205980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.992 [2024-12-09 05:25:22.206041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.992 [2024-12-09 05:25:22.206057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.992 [2024-12-09 05:25:22.206066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.992 [2024-12-09 05:25:22.206074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.992 [2024-12-09 05:25:22.206092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.992 qpair failed and we were unable to recover it.
00:30:39.992 [2024-12-09 05:25:22.216002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.992 [2024-12-09 05:25:22.216080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.992 [2024-12-09 05:25:22.216095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.992 [2024-12-09 05:25:22.216104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.992 [2024-12-09 05:25:22.216113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.992 [2024-12-09 05:25:22.216130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.992 qpair failed and we were unable to recover it.
00:30:39.992 [2024-12-09 05:25:22.226066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.992 [2024-12-09 05:25:22.226121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.992 [2024-12-09 05:25:22.226137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.992 [2024-12-09 05:25:22.226147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.992 [2024-12-09 05:25:22.226155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.992 [2024-12-09 05:25:22.226173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.992 qpair failed and we were unable to recover it.
00:30:39.992 [2024-12-09 05:25:22.236052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.992 [2024-12-09 05:25:22.236110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.992 [2024-12-09 05:25:22.236126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.992 [2024-12-09 05:25:22.236135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.992 [2024-12-09 05:25:22.236143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.992 [2024-12-09 05:25:22.236160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.992 qpair failed and we were unable to recover it.
00:30:39.992 [2024-12-09 05:25:22.246090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.992 [2024-12-09 05:25:22.246147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.992 [2024-12-09 05:25:22.246162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.992 [2024-12-09 05:25:22.246174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.992 [2024-12-09 05:25:22.246182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.992 [2024-12-09 05:25:22.246200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.992 qpair failed and we were unable to recover it.
00:30:39.992 [2024-12-09 05:25:22.256113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.992 [2024-12-09 05:25:22.256171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.992 [2024-12-09 05:25:22.256186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.992 [2024-12-09 05:25:22.256195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.992 [2024-12-09 05:25:22.256204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.992 [2024-12-09 05:25:22.256225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.992 qpair failed and we were unable to recover it.
00:30:39.992 [2024-12-09 05:25:22.266143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.992 [2024-12-09 05:25:22.266200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.992 [2024-12-09 05:25:22.266219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.992 [2024-12-09 05:25:22.266228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.992 [2024-12-09 05:25:22.266236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.992 [2024-12-09 05:25:22.266254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.992 qpair failed and we were unable to recover it.
00:30:39.992 [2024-12-09 05:25:22.276216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.992 [2024-12-09 05:25:22.276313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.992 [2024-12-09 05:25:22.276328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.992 [2024-12-09 05:25:22.276338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.992 [2024-12-09 05:25:22.276346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.992 [2024-12-09 05:25:22.276363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.992 qpair failed and we were unable to recover it.
00:30:39.992 [2024-12-09 05:25:22.286114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:39.992 [2024-12-09 05:25:22.286169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:39.992 [2024-12-09 05:25:22.286184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:39.992 [2024-12-09 05:25:22.286193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.992 [2024-12-09 05:25:22.286202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90
00:30:39.992 [2024-12-09 05:25:22.286229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.992 qpair failed and we were unable to recover it.
00:30:39.992 [2024-12-09 05:25:22.296250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.992 [2024-12-09 05:25:22.296327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.992 [2024-12-09 05:25:22.296342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.992 [2024-12-09 05:25:22.296351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.992 [2024-12-09 05:25:22.296360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.992 [2024-12-09 05:25:22.296377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.992 qpair failed and we were unable to recover it. 
00:30:39.992 [2024-12-09 05:25:22.306249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.992 [2024-12-09 05:25:22.306306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.992 [2024-12-09 05:25:22.306322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.992 [2024-12-09 05:25:22.306332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.992 [2024-12-09 05:25:22.306341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.992 [2024-12-09 05:25:22.306360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.992 qpair failed and we were unable to recover it. 
00:30:39.992 [2024-12-09 05:25:22.316274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.992 [2024-12-09 05:25:22.316326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.992 [2024-12-09 05:25:22.316341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.992 [2024-12-09 05:25:22.316350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.992 [2024-12-09 05:25:22.316358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.992 [2024-12-09 05:25:22.316375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.992 qpair failed and we were unable to recover it. 
00:30:39.992 [2024-12-09 05:25:22.326305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.992 [2024-12-09 05:25:22.326365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.992 [2024-12-09 05:25:22.326380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.992 [2024-12-09 05:25:22.326389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.992 [2024-12-09 05:25:22.326397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.993 [2024-12-09 05:25:22.326414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.993 qpair failed and we were unable to recover it. 
00:30:39.993 [2024-12-09 05:25:22.336317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.993 [2024-12-09 05:25:22.336379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.993 [2024-12-09 05:25:22.336394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.993 [2024-12-09 05:25:22.336402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.993 [2024-12-09 05:25:22.336412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.993 [2024-12-09 05:25:22.336429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.993 qpair failed and we were unable to recover it. 
00:30:39.993 [2024-12-09 05:25:22.346285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.993 [2024-12-09 05:25:22.346342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.993 [2024-12-09 05:25:22.346358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.993 [2024-12-09 05:25:22.346367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.993 [2024-12-09 05:25:22.346375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.993 [2024-12-09 05:25:22.346392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.993 qpair failed and we were unable to recover it. 
00:30:39.993 [2024-12-09 05:25:22.356380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.993 [2024-12-09 05:25:22.356436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.993 [2024-12-09 05:25:22.356452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.993 [2024-12-09 05:25:22.356461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.993 [2024-12-09 05:25:22.356469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.993 [2024-12-09 05:25:22.356486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.993 qpair failed and we were unable to recover it. 
00:30:39.993 [2024-12-09 05:25:22.366358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.993 [2024-12-09 05:25:22.366416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.993 [2024-12-09 05:25:22.366432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.993 [2024-12-09 05:25:22.366441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.993 [2024-12-09 05:25:22.366449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.993 [2024-12-09 05:25:22.366465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.993 qpair failed and we were unable to recover it. 
00:30:39.993 [2024-12-09 05:25:22.376441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.993 [2024-12-09 05:25:22.376497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.993 [2024-12-09 05:25:22.376514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.993 [2024-12-09 05:25:22.376524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.993 [2024-12-09 05:25:22.376532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.993 [2024-12-09 05:25:22.376549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.993 qpair failed and we were unable to recover it. 
00:30:39.993 [2024-12-09 05:25:22.386469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.993 [2024-12-09 05:25:22.386522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.993 [2024-12-09 05:25:22.386537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.993 [2024-12-09 05:25:22.386546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.993 [2024-12-09 05:25:22.386555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.993 [2024-12-09 05:25:22.386572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.993 qpair failed and we were unable to recover it. 
00:30:39.993 [2024-12-09 05:25:22.396440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.993 [2024-12-09 05:25:22.396495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.993 [2024-12-09 05:25:22.396510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.993 [2024-12-09 05:25:22.396519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.993 [2024-12-09 05:25:22.396528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.993 [2024-12-09 05:25:22.396545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.993 qpair failed and we were unable to recover it. 
00:30:39.993 [2024-12-09 05:25:22.406559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.993 [2024-12-09 05:25:22.406669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.993 [2024-12-09 05:25:22.406684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.993 [2024-12-09 05:25:22.406693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.993 [2024-12-09 05:25:22.406701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.993 [2024-12-09 05:25:22.406718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.993 qpair failed and we were unable to recover it. 
00:30:39.993 [2024-12-09 05:25:22.416552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.993 [2024-12-09 05:25:22.416640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.993 [2024-12-09 05:25:22.416655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.993 [2024-12-09 05:25:22.416664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.993 [2024-12-09 05:25:22.416675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.993 [2024-12-09 05:25:22.416692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.993 qpair failed and we were unable to recover it. 
00:30:39.993 [2024-12-09 05:25:22.426621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.993 [2024-12-09 05:25:22.426675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.993 [2024-12-09 05:25:22.426690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.993 [2024-12-09 05:25:22.426699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.993 [2024-12-09 05:25:22.426708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.993 [2024-12-09 05:25:22.426724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.993 qpair failed and we were unable to recover it. 
00:30:39.993 [2024-12-09 05:25:22.436542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.993 [2024-12-09 05:25:22.436599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.993 [2024-12-09 05:25:22.436614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.993 [2024-12-09 05:25:22.436623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.993 [2024-12-09 05:25:22.436631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.993 [2024-12-09 05:25:22.436647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.993 qpair failed and we were unable to recover it. 
00:30:39.993 [2024-12-09 05:25:22.446643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.993 [2024-12-09 05:25:22.446717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.993 [2024-12-09 05:25:22.446733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.993 [2024-12-09 05:25:22.446741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.993 [2024-12-09 05:25:22.446750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.993 [2024-12-09 05:25:22.446766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.993 qpair failed and we were unable to recover it. 
00:30:39.993 [2024-12-09 05:25:22.456652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.993 [2024-12-09 05:25:22.456713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.993 [2024-12-09 05:25:22.456729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.993 [2024-12-09 05:25:22.456738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.993 [2024-12-09 05:25:22.456746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:39.994 [2024-12-09 05:25:22.456763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.994 qpair failed and we were unable to recover it. 
00:30:40.255 [2024-12-09 05:25:22.466629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.255 [2024-12-09 05:25:22.466695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.255 [2024-12-09 05:25:22.466711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.255 [2024-12-09 05:25:22.466720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.255 [2024-12-09 05:25:22.466728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.255 [2024-12-09 05:25:22.466745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.255 qpair failed and we were unable to recover it. 
00:30:40.255 [2024-12-09 05:25:22.476715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.255 [2024-12-09 05:25:22.476771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.255 [2024-12-09 05:25:22.476786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.255 [2024-12-09 05:25:22.476795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.255 [2024-12-09 05:25:22.476803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.255 [2024-12-09 05:25:22.476820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.255 qpair failed and we were unable to recover it. 
00:30:40.255 [2024-12-09 05:25:22.486728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.255 [2024-12-09 05:25:22.486786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.255 [2024-12-09 05:25:22.486802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.255 [2024-12-09 05:25:22.486811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.255 [2024-12-09 05:25:22.486819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.255 [2024-12-09 05:25:22.486836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.255 qpair failed and we were unable to recover it. 
00:30:40.255 [2024-12-09 05:25:22.496775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.255 [2024-12-09 05:25:22.496860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.255 [2024-12-09 05:25:22.496875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.255 [2024-12-09 05:25:22.496884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.255 [2024-12-09 05:25:22.496893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.255 [2024-12-09 05:25:22.496910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.255 qpair failed and we were unable to recover it. 
00:30:40.255 [2024-12-09 05:25:22.506805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.255 [2024-12-09 05:25:22.506862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.255 [2024-12-09 05:25:22.506881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.255 [2024-12-09 05:25:22.506890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.255 [2024-12-09 05:25:22.506898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.255 [2024-12-09 05:25:22.506915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.255 qpair failed and we were unable to recover it. 
00:30:40.255 [2024-12-09 05:25:22.516802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.255 [2024-12-09 05:25:22.516858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.255 [2024-12-09 05:25:22.516873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.255 [2024-12-09 05:25:22.516882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.255 [2024-12-09 05:25:22.516891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.255 [2024-12-09 05:25:22.516908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.255 qpair failed and we were unable to recover it. 
00:30:40.255 [2024-12-09 05:25:22.526828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.255 [2024-12-09 05:25:22.526887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.255 [2024-12-09 05:25:22.526903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.255 [2024-12-09 05:25:22.526912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.255 [2024-12-09 05:25:22.526920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.255 [2024-12-09 05:25:22.526938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.255 qpair failed and we were unable to recover it. 
00:30:40.255 [2024-12-09 05:25:22.536888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.255 [2024-12-09 05:25:22.536970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.255 [2024-12-09 05:25:22.536985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.255 [2024-12-09 05:25:22.536994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.255 [2024-12-09 05:25:22.537002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.255 [2024-12-09 05:25:22.537019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.255 qpair failed and we were unable to recover it. 
00:30:40.255 [2024-12-09 05:25:22.546906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.255 [2024-12-09 05:25:22.546960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.256 [2024-12-09 05:25:22.546975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.256 [2024-12-09 05:25:22.546984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.256 [2024-12-09 05:25:22.546995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.256 [2024-12-09 05:25:22.547012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.256 qpair failed and we were unable to recover it. 
00:30:40.256 [2024-12-09 05:25:22.556940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.256 [2024-12-09 05:25:22.556997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.256 [2024-12-09 05:25:22.557012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.256 [2024-12-09 05:25:22.557021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.256 [2024-12-09 05:25:22.557029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.256 [2024-12-09 05:25:22.557046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.256 qpair failed and we were unable to recover it. 
00:30:40.256 [2024-12-09 05:25:22.566976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.256 [2024-12-09 05:25:22.567035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.256 [2024-12-09 05:25:22.567051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.256 [2024-12-09 05:25:22.567060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.256 [2024-12-09 05:25:22.567068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.256 [2024-12-09 05:25:22.567086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.256 qpair failed and we were unable to recover it. 
00:30:40.256 [2024-12-09 05:25:22.576937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.256 [2024-12-09 05:25:22.577028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.256 [2024-12-09 05:25:22.577044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.256 [2024-12-09 05:25:22.577052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.256 [2024-12-09 05:25:22.577061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.256 [2024-12-09 05:25:22.577078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.256 qpair failed and we were unable to recover it. 
00:30:40.256 [2024-12-09 05:25:22.587036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.256 [2024-12-09 05:25:22.587090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.256 [2024-12-09 05:25:22.587105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.256 [2024-12-09 05:25:22.587115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.256 [2024-12-09 05:25:22.587123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.256 [2024-12-09 05:25:22.587141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.256 qpair failed and we were unable to recover it. 
00:30:40.256 [2024-12-09 05:25:22.597086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.256 [2024-12-09 05:25:22.597140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.256 [2024-12-09 05:25:22.597155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.256 [2024-12-09 05:25:22.597164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.256 [2024-12-09 05:25:22.597172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.256 [2024-12-09 05:25:22.597189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.256 qpair failed and we were unable to recover it. 
00:30:40.256 [2024-12-09 05:25:22.607017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.256 [2024-12-09 05:25:22.607077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.256 [2024-12-09 05:25:22.607093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.256 [2024-12-09 05:25:22.607102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.256 [2024-12-09 05:25:22.607110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.256 [2024-12-09 05:25:22.607127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.256 qpair failed and we were unable to recover it. 
00:30:40.256 [2024-12-09 05:25:22.617111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.256 [2024-12-09 05:25:22.617166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.256 [2024-12-09 05:25:22.617181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.256 [2024-12-09 05:25:22.617190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.256 [2024-12-09 05:25:22.617198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.256 [2024-12-09 05:25:22.617232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.256 qpair failed and we were unable to recover it. 
00:30:40.256 [2024-12-09 05:25:22.627152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.256 [2024-12-09 05:25:22.627214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.256 [2024-12-09 05:25:22.627230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.256 [2024-12-09 05:25:22.627239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.256 [2024-12-09 05:25:22.627247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.256 [2024-12-09 05:25:22.627265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.256 qpair failed and we were unable to recover it. 
00:30:40.256 [2024-12-09 05:25:22.637119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.256 [2024-12-09 05:25:22.637172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.256 [2024-12-09 05:25:22.637190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.256 [2024-12-09 05:25:22.637199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.256 [2024-12-09 05:25:22.637211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.256 [2024-12-09 05:25:22.637230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.256 qpair failed and we were unable to recover it. 
00:30:40.256 [2024-12-09 05:25:22.647155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.256 [2024-12-09 05:25:22.647217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.256 [2024-12-09 05:25:22.647232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.256 [2024-12-09 05:25:22.647241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.256 [2024-12-09 05:25:22.647250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.256 [2024-12-09 05:25:22.647267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.256 qpair failed and we were unable to recover it. 
00:30:40.256 [2024-12-09 05:25:22.657255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.256 [2024-12-09 05:25:22.657314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.256 [2024-12-09 05:25:22.657329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.256 [2024-12-09 05:25:22.657338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.256 [2024-12-09 05:25:22.657347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.256 [2024-12-09 05:25:22.657364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.256 qpair failed and we were unable to recover it. 
00:30:40.256 [2024-12-09 05:25:22.667185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.256 [2024-12-09 05:25:22.667247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.256 [2024-12-09 05:25:22.667264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.256 [2024-12-09 05:25:22.667273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.256 [2024-12-09 05:25:22.667281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.256 [2024-12-09 05:25:22.667298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.256 qpair failed and we were unable to recover it. 
00:30:40.256 [2024-12-09 05:25:22.677225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.256 [2024-12-09 05:25:22.677282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.256 [2024-12-09 05:25:22.677297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.257 [2024-12-09 05:25:22.677310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.257 [2024-12-09 05:25:22.677318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.257 [2024-12-09 05:25:22.677336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.257 qpair failed and we were unable to recover it. 
00:30:40.257 [2024-12-09 05:25:22.687324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.257 [2024-12-09 05:25:22.687384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.257 [2024-12-09 05:25:22.687399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.257 [2024-12-09 05:25:22.687408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.257 [2024-12-09 05:25:22.687416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.257 [2024-12-09 05:25:22.687433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.257 qpair failed and we were unable to recover it. 
00:30:40.257 [2024-12-09 05:25:22.697371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.257 [2024-12-09 05:25:22.697455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.257 [2024-12-09 05:25:22.697470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.257 [2024-12-09 05:25:22.697479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.257 [2024-12-09 05:25:22.697488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.257 [2024-12-09 05:25:22.697505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.257 qpair failed and we were unable to recover it. 
00:30:40.257 [2024-12-09 05:25:22.707371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.257 [2024-12-09 05:25:22.707424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.257 [2024-12-09 05:25:22.707439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.257 [2024-12-09 05:25:22.707448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.257 [2024-12-09 05:25:22.707456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.257 [2024-12-09 05:25:22.707474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.257 qpair failed and we were unable to recover it. 
00:30:40.257 [2024-12-09 05:25:22.717326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.257 [2024-12-09 05:25:22.717384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.257 [2024-12-09 05:25:22.717399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.257 [2024-12-09 05:25:22.717408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.257 [2024-12-09 05:25:22.717416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.257 [2024-12-09 05:25:22.717437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.257 qpair failed and we were unable to recover it. 
00:30:40.518 [2024-12-09 05:25:22.727359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.518 [2024-12-09 05:25:22.727421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.518 [2024-12-09 05:25:22.727437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.518 [2024-12-09 05:25:22.727446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.518 [2024-12-09 05:25:22.727454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.518 [2024-12-09 05:25:22.727471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.518 qpair failed and we were unable to recover it. 
00:30:40.518 [2024-12-09 05:25:22.737420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.518 [2024-12-09 05:25:22.737476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.518 [2024-12-09 05:25:22.737491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.518 [2024-12-09 05:25:22.737500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.518 [2024-12-09 05:25:22.737508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.518 [2024-12-09 05:25:22.737525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.518 qpair failed and we were unable to recover it. 
00:30:40.518 [2024-12-09 05:25:22.747395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.518 [2024-12-09 05:25:22.747498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.518 [2024-12-09 05:25:22.747514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.518 [2024-12-09 05:25:22.747523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.518 [2024-12-09 05:25:22.747531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.518 [2024-12-09 05:25:22.747548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.518 qpair failed and we were unable to recover it. 
00:30:40.518 [2024-12-09 05:25:22.757435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.518 [2024-12-09 05:25:22.757487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.518 [2024-12-09 05:25:22.757502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.518 [2024-12-09 05:25:22.757510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.518 [2024-12-09 05:25:22.757519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.518 [2024-12-09 05:25:22.757536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.518 qpair failed and we were unable to recover it. 
00:30:40.518 [2024-12-09 05:25:22.767530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.518 [2024-12-09 05:25:22.767589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.518 [2024-12-09 05:25:22.767604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.518 [2024-12-09 05:25:22.767613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.518 [2024-12-09 05:25:22.767621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.518 [2024-12-09 05:25:22.767639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.518 qpair failed and we were unable to recover it. 
00:30:40.518 [2024-12-09 05:25:22.777484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.518 [2024-12-09 05:25:22.777545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.518 [2024-12-09 05:25:22.777560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.518 [2024-12-09 05:25:22.777569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.518 [2024-12-09 05:25:22.777578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.518 [2024-12-09 05:25:22.777595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.518 qpair failed and we were unable to recover it. 
00:30:40.518 [2024-12-09 05:25:22.787558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.518 [2024-12-09 05:25:22.787621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.518 [2024-12-09 05:25:22.787637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.518 [2024-12-09 05:25:22.787646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.518 [2024-12-09 05:25:22.787654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.518 [2024-12-09 05:25:22.787671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.518 qpair failed and we were unable to recover it. 
00:30:40.518 [2024-12-09 05:25:22.797533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.518 [2024-12-09 05:25:22.797609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.518 [2024-12-09 05:25:22.797624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.518 [2024-12-09 05:25:22.797633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.518 [2024-12-09 05:25:22.797641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.518 [2024-12-09 05:25:22.797658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.518 qpair failed and we were unable to recover it. 
00:30:40.518 [2024-12-09 05:25:22.807697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.518 [2024-12-09 05:25:22.807755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.518 [2024-12-09 05:25:22.807770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.518 [2024-12-09 05:25:22.807782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.518 [2024-12-09 05:25:22.807791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.519 [2024-12-09 05:25:22.807808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.519 qpair failed and we were unable to recover it. 
00:30:40.519 [2024-12-09 05:25:22.817693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-12-09 05:25:22.817753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-12-09 05:25:22.817769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-12-09 05:25:22.817778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-12-09 05:25:22.817786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.519 [2024-12-09 05:25:22.817803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.519 qpair failed and we were unable to recover it. 
00:30:40.519 [2024-12-09 05:25:22.827687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-12-09 05:25:22.827742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-12-09 05:25:22.827756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-12-09 05:25:22.827765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-12-09 05:25:22.827774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.519 [2024-12-09 05:25:22.827791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.519 qpair failed and we were unable to recover it. 
00:30:40.519 [2024-12-09 05:25:22.837721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-12-09 05:25:22.837774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-12-09 05:25:22.837788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-12-09 05:25:22.837797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-12-09 05:25:22.837806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.519 [2024-12-09 05:25:22.837822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.519 qpair failed and we were unable to recover it. 
00:30:40.519 [2024-12-09 05:25:22.847692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-12-09 05:25:22.847746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-12-09 05:25:22.847760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-12-09 05:25:22.847769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-12-09 05:25:22.847778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.519 [2024-12-09 05:25:22.847798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.519 qpair failed and we were unable to recover it. 
00:30:40.519 [2024-12-09 05:25:22.857723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-12-09 05:25:22.857782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-12-09 05:25:22.857796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-12-09 05:25:22.857805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-12-09 05:25:22.857814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.519 [2024-12-09 05:25:22.857831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.519 qpair failed and we were unable to recover it. 
00:30:40.519 [2024-12-09 05:25:22.867744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-12-09 05:25:22.867808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-12-09 05:25:22.867824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-12-09 05:25:22.867833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-12-09 05:25:22.867841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.519 [2024-12-09 05:25:22.867858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.519 qpair failed and we were unable to recover it. 
00:30:40.519 [2024-12-09 05:25:22.877875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-12-09 05:25:22.877942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-12-09 05:25:22.877957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-12-09 05:25:22.877967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-12-09 05:25:22.877975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.519 [2024-12-09 05:25:22.877993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.519 qpair failed and we were unable to recover it. 
00:30:40.519 [2024-12-09 05:25:22.887901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-12-09 05:25:22.887959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-12-09 05:25:22.887974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-12-09 05:25:22.887983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-12-09 05:25:22.887992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.519 [2024-12-09 05:25:22.888009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.519 qpair failed and we were unable to recover it. 
00:30:40.519 [2024-12-09 05:25:22.897906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-12-09 05:25:22.897998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-12-09 05:25:22.898013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-12-09 05:25:22.898022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-12-09 05:25:22.898030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.519 [2024-12-09 05:25:22.898047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.519 qpair failed and we were unable to recover it. 
00:30:40.519 [2024-12-09 05:25:22.907967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-12-09 05:25:22.908021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-12-09 05:25:22.908036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-12-09 05:25:22.908045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-12-09 05:25:22.908053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.519 [2024-12-09 05:25:22.908071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.519 qpair failed and we were unable to recover it. 
00:30:40.519 [2024-12-09 05:25:22.917905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-12-09 05:25:22.917963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-12-09 05:25:22.917978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-12-09 05:25:22.917987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-12-09 05:25:22.917995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.519 [2024-12-09 05:25:22.918012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.519 qpair failed and we were unable to recover it. 
00:30:40.519 [2024-12-09 05:25:22.928010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-12-09 05:25:22.928069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-12-09 05:25:22.928084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-12-09 05:25:22.928093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.519 [2024-12-09 05:25:22.928101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.519 [2024-12-09 05:25:22.928118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.519 qpair failed and we were unable to recover it. 
00:30:40.519 [2024-12-09 05:25:22.938012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.519 [2024-12-09 05:25:22.938068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.519 [2024-12-09 05:25:22.938086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.519 [2024-12-09 05:25:22.938095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.520 [2024-12-09 05:25:22.938103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.520 [2024-12-09 05:25:22.938120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.520 qpair failed and we were unable to recover it. 
00:30:40.520 [2024-12-09 05:25:22.948053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.520 [2024-12-09 05:25:22.948112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.520 [2024-12-09 05:25:22.948127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.520 [2024-12-09 05:25:22.948136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.520 [2024-12-09 05:25:22.948144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.520 [2024-12-09 05:25:22.948161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.520 qpair failed and we were unable to recover it. 
00:30:40.520 [2024-12-09 05:25:22.958068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.520 [2024-12-09 05:25:22.958149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.520 [2024-12-09 05:25:22.958164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.520 [2024-12-09 05:25:22.958174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.520 [2024-12-09 05:25:22.958182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.520 [2024-12-09 05:25:22.958199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.520 qpair failed and we were unable to recover it. 
00:30:40.520 [2024-12-09 05:25:22.968142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.520 [2024-12-09 05:25:22.968200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.520 [2024-12-09 05:25:22.968220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.520 [2024-12-09 05:25:22.968229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.520 [2024-12-09 05:25:22.968237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.520 [2024-12-09 05:25:22.968254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.520 qpair failed and we were unable to recover it. 
00:30:40.520 [2024-12-09 05:25:22.978128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.520 [2024-12-09 05:25:22.978227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.520 [2024-12-09 05:25:22.978243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.520 [2024-12-09 05:25:22.978252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.520 [2024-12-09 05:25:22.978266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.520 [2024-12-09 05:25:22.978283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.520 qpair failed and we were unable to recover it. 
00:30:40.780 [2024-12-09 05:25:22.988179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.780 [2024-12-09 05:25:22.988240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.780 [2024-12-09 05:25:22.988255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.780 [2024-12-09 05:25:22.988265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.780 [2024-12-09 05:25:22.988273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.780 [2024-12-09 05:25:22.988291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.780 qpair failed and we were unable to recover it. 
00:30:40.780 [2024-12-09 05:25:22.998110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.780 [2024-12-09 05:25:22.998204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.780 [2024-12-09 05:25:22.998224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.780 [2024-12-09 05:25:22.998232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.780 [2024-12-09 05:25:22.998241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.780 [2024-12-09 05:25:22.998258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.780 qpair failed and we were unable to recover it. 
00:30:40.780 [2024-12-09 05:25:23.008225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.780 [2024-12-09 05:25:23.008281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.780 [2024-12-09 05:25:23.008296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.780 [2024-12-09 05:25:23.008305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.780 [2024-12-09 05:25:23.008314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.780 [2024-12-09 05:25:23.008331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.780 qpair failed and we were unable to recover it. 
00:30:40.780 [2024-12-09 05:25:23.018253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.780 [2024-12-09 05:25:23.018355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.780 [2024-12-09 05:25:23.018370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.780 [2024-12-09 05:25:23.018378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.780 [2024-12-09 05:25:23.018387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.780 [2024-12-09 05:25:23.018403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.780 qpair failed and we were unable to recover it. 
00:30:40.780 [2024-12-09 05:25:23.028274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.780 [2024-12-09 05:25:23.028331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.780 [2024-12-09 05:25:23.028346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.780 [2024-12-09 05:25:23.028355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.780 [2024-12-09 05:25:23.028363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.780 [2024-12-09 05:25:23.028380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.780 qpair failed and we were unable to recover it. 
00:30:40.780 [2024-12-09 05:25:23.038348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.780 [2024-12-09 05:25:23.038403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.780 [2024-12-09 05:25:23.038418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.780 [2024-12-09 05:25:23.038428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.781 [2024-12-09 05:25:23.038436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff960000b90 00:30:40.781 [2024-12-09 05:25:23.038453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:40.781 qpair failed and we were unable to recover it. 00:30:40.781 [2024-12-09 05:25:23.038578] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:30:40.781 A controller has encountered a failure and is being reset. 00:30:40.781 Controller properly reset. 00:30:40.781 Initializing NVMe Controllers 00:30:40.781 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:40.781 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:40.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:40.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:40.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:40.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:40.781 Initialization complete. Launching workers. 
00:30:40.781 Starting thread on core 1 00:30:40.781 Starting thread on core 2 00:30:40.781 Starting thread on core 3 00:30:40.781 Starting thread on core 0 00:30:40.781 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:40.781 00:30:40.781 real 0m11.484s 00:30:40.781 user 0m21.369s 00:30:40.781 sys 0m5.134s 00:30:40.781 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:40.781 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:40.781 ************************************ 00:30:40.781 END TEST nvmf_target_disconnect_tc2 00:30:40.781 ************************************ 00:30:40.781 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:40.781 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:40.781 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:40.781 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:40.781 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:30:40.781 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:40.781 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:30:40.781 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:40.781 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:40.781 rmmod nvme_tcp 00:30:40.781 rmmod nvme_fabrics 00:30:40.781 rmmod nvme_keyring 00:30:40.781 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:30:41.041 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:30:41.041 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:30:41.041 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 663201 ']' 00:30:41.041 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 663201 00:30:41.041 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 663201 ']' 00:30:41.041 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 663201 00:30:41.041 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:30:41.041 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:41.041 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 663201 00:30:41.041 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:30:41.041 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:30:41.041 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 663201' 00:30:41.041 killing process with pid 663201 00:30:41.041 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 663201 00:30:41.041 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 663201 00:30:41.301 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:41.301 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:41.301 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:41.301 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:30:41.301 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:30:41.301 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:41.301 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:30:41.301 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:41.301 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:41.301 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.301 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:41.301 05:25:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.207 05:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:43.207 00:30:43.207 real 0m21.752s 00:30:43.207 user 0m49.541s 00:30:43.207 sys 0m11.329s 00:30:43.207 05:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:43.207 05:25:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:43.207 ************************************ 00:30:43.207 END TEST nvmf_target_disconnect 00:30:43.207 ************************************ 00:30:43.467 05:25:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:43.467 00:30:43.467 real 6m37.866s 00:30:43.467 user 11m28.841s 00:30:43.467 sys 2m28.374s 00:30:43.467 05:25:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:43.467 05:25:25 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.467 ************************************ 00:30:43.467 END TEST nvmf_host 00:30:43.467 ************************************ 00:30:43.467 05:25:25 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:43.467 05:25:25 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:43.467 05:25:25 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:43.467 05:25:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:43.467 05:25:25 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:43.467 05:25:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:43.467 ************************************ 00:30:43.467 START TEST nvmf_target_core_interrupt_mode 00:30:43.467 ************************************ 00:30:43.467 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:43.467 * Looking for test storage... 
00:30:43.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:43.467 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:43.467 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:30:43.467 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:43.727 05:25:25 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:43.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.727 --rc 
genhtml_branch_coverage=1 00:30:43.727 --rc genhtml_function_coverage=1 00:30:43.727 --rc genhtml_legend=1 00:30:43.727 --rc geninfo_all_blocks=1 00:30:43.727 --rc geninfo_unexecuted_blocks=1 00:30:43.727 00:30:43.727 ' 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:43.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.727 --rc genhtml_branch_coverage=1 00:30:43.727 --rc genhtml_function_coverage=1 00:30:43.727 --rc genhtml_legend=1 00:30:43.727 --rc geninfo_all_blocks=1 00:30:43.727 --rc geninfo_unexecuted_blocks=1 00:30:43.727 00:30:43.727 ' 00:30:43.727 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:43.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.727 --rc genhtml_branch_coverage=1 00:30:43.727 --rc genhtml_function_coverage=1 00:30:43.727 --rc genhtml_legend=1 00:30:43.727 --rc geninfo_all_blocks=1 00:30:43.727 --rc geninfo_unexecuted_blocks=1 00:30:43.728 00:30:43.728 ' 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:43.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.728 --rc genhtml_branch_coverage=1 00:30:43.728 --rc genhtml_function_coverage=1 00:30:43.728 --rc genhtml_legend=1 00:30:43.728 --rc geninfo_all_blocks=1 00:30:43.728 --rc geninfo_unexecuted_blocks=1 00:30:43.728 00:30:43.728 ' 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:43.728 
05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.728 05:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.728 05:25:25 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:43.728 
05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:43.728 ************************************ 00:30:43.728 START TEST nvmf_abort 00:30:43.728 ************************************ 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:43.728 * Looking for test storage... 
00:30:43.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:30:43.728 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:43.988 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:43.989 05:25:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:43.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.989 --rc genhtml_branch_coverage=1 00:30:43.989 --rc genhtml_function_coverage=1 00:30:43.989 --rc genhtml_legend=1 00:30:43.989 --rc geninfo_all_blocks=1 00:30:43.989 --rc geninfo_unexecuted_blocks=1 00:30:43.989 00:30:43.989 ' 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:43.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.989 --rc genhtml_branch_coverage=1 00:30:43.989 --rc genhtml_function_coverage=1 00:30:43.989 --rc genhtml_legend=1 00:30:43.989 --rc geninfo_all_blocks=1 00:30:43.989 --rc geninfo_unexecuted_blocks=1 00:30:43.989 00:30:43.989 ' 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:43.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.989 --rc genhtml_branch_coverage=1 00:30:43.989 --rc genhtml_function_coverage=1 00:30:43.989 --rc genhtml_legend=1 00:30:43.989 --rc geninfo_all_blocks=1 00:30:43.989 --rc geninfo_unexecuted_blocks=1 00:30:43.989 00:30:43.989 ' 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:43.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.989 --rc genhtml_branch_coverage=1 00:30:43.989 --rc genhtml_function_coverage=1 00:30:43.989 --rc genhtml_legend=1 00:30:43.989 --rc geninfo_all_blocks=1 00:30:43.989 --rc geninfo_unexecuted_blocks=1 00:30:43.989 00:30:43.989 ' 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:43.989 05:25:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:43.989 05:25:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:43.989 05:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:52.112 05:25:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:52.112 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:52.112 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:52.112 
05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:52.112 Found net devices under 0000:af:00.0: cvl_0_0 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:52.112 Found net devices under 0000:af:00.1: cvl_0_1 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:52.112 05:25:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:52.112 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:52.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:52.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:30:52.113 00:30:52.113 --- 10.0.0.2 ping statistics --- 00:30:52.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.113 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:52.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:52.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:30:52.113 00:30:52.113 --- 10.0.0.1 ping statistics --- 00:30:52.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.113 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=668068 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 668068 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 668068 ']' 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:52.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:52.113 05:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:52.113 [2024-12-09 05:25:33.581150] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:52.113 [2024-12-09 05:25:33.582169] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:30:52.113 [2024-12-09 05:25:33.582218] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:52.113 [2024-12-09 05:25:33.679820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:52.113 [2024-12-09 05:25:33.722066] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:52.113 [2024-12-09 05:25:33.722102] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:52.113 [2024-12-09 05:25:33.722112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:52.113 [2024-12-09 05:25:33.722120] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:52.113 [2024-12-09 05:25:33.722127] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:52.113 [2024-12-09 05:25:33.723700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:52.113 [2024-12-09 05:25:33.723810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.113 [2024-12-09 05:25:33.723811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:52.113 [2024-12-09 05:25:33.792081] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:52.113 [2024-12-09 05:25:33.792847] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:52.113 [2024-12-09 05:25:33.793045] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:52.113 [2024-12-09 05:25:33.793183] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:52.113 [2024-12-09 05:25:34.472750] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:30:52.113 Malloc0 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:52.113 Delay0 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:52.113 [2024-12-09 05:25:34.560552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.113 05:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:52.373 [2024-12-09 05:25:34.658052] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:54.906 Initializing NVMe Controllers 00:30:54.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:54.906 controller IO queue size 128 less than required 00:30:54.906 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:54.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:54.906 Initialization complete. Launching workers. 
00:30:54.906 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37126 00:30:54.906 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37183, failed to submit 66 00:30:54.906 success 37126, unsuccessful 57, failed 0 00:30:54.906 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:54.906 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.906 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:54.906 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.906 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:54.906 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:54.906 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:54.906 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:54.906 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:54.906 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:54.906 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:54.906 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:54.906 rmmod nvme_tcp 00:30:54.906 rmmod nvme_fabrics 00:30:54.906 rmmod nvme_keyring 00:30:54.906 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:54.906 05:25:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:54.906 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:54.906 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 668068 ']' 00:30:54.906 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 668068 00:30:54.906 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 668068 ']' 00:30:54.906 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 668068 00:30:54.906 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:30:54.907 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:54.907 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 668068 00:30:54.907 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:54.907 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:54.907 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 668068' 00:30:54.907 killing process with pid 668068 00:30:54.907 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 668068 00:30:54.907 05:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 668068 00:30:54.907 05:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:54.907 05:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:54.907 05:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:54.907 05:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:54.907 05:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:54.907 05:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:54.907 05:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:54.907 05:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:54.907 05:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:54.907 05:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.907 05:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:54.907 05:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.814 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:56.814 00:30:56.814 real 0m13.173s 00:30:56.814 user 0m10.475s 00:30:56.814 sys 0m7.281s 00:30:56.814 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:56.814 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:56.814 ************************************ 00:30:56.814 END TEST nvmf_abort 00:30:56.814 ************************************ 00:30:56.814 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:56.814 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:56.814 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:56.814 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:57.074 ************************************ 00:30:57.074 START TEST nvmf_ns_hotplug_stress 00:30:57.074 ************************************ 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:57.074 * Looking for test storage... 00:30:57.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
scripts/common.sh@334 -- # local ver2 ver2_l 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:57.074 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:57.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.074 --rc genhtml_branch_coverage=1 00:30:57.074 --rc genhtml_function_coverage=1 00:30:57.074 --rc genhtml_legend=1 00:30:57.074 --rc geninfo_all_blocks=1 00:30:57.074 --rc geninfo_unexecuted_blocks=1 00:30:57.074 00:30:57.074 ' 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:57.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.075 --rc genhtml_branch_coverage=1 00:30:57.075 --rc genhtml_function_coverage=1 00:30:57.075 --rc genhtml_legend=1 00:30:57.075 --rc geninfo_all_blocks=1 00:30:57.075 --rc geninfo_unexecuted_blocks=1 00:30:57.075 00:30:57.075 ' 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:57.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.075 --rc genhtml_branch_coverage=1 00:30:57.075 --rc genhtml_function_coverage=1 00:30:57.075 --rc genhtml_legend=1 00:30:57.075 --rc geninfo_all_blocks=1 00:30:57.075 --rc geninfo_unexecuted_blocks=1 00:30:57.075 00:30:57.075 ' 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:57.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.075 --rc genhtml_branch_coverage=1 00:30:57.075 --rc genhtml_function_coverage=1 00:30:57.075 --rc genhtml_legend=1 00:30:57.075 --rc geninfo_all_blocks=1 00:30:57.075 --rc geninfo_unexecuted_blocks=1 00:30:57.075 00:30:57.075 ' 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:57.075 05:25:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:57.075 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:57.076 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:57.076 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:57.076 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:57.076 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.076 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.076 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.076 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:57.076 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:57.076 05:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:57.076 05:25:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:31:05.199 05:25:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:05.199 
05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:05.199 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:05.199 05:25:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:05.199 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:05.199 05:25:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:05.199 Found net devices under 0000:af:00.0: cvl_0_0 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:05.199 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:05.199 Found net devices under 0000:af:00.1: cvl_0_1 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:05.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:05.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:31:05.200 00:31:05.200 --- 10.0.0.2 ping statistics --- 00:31:05.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.200 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:05.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:05.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:31:05.200 00:31:05.200 --- 10.0.0.1 ping statistics --- 00:31:05.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.200 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:05.200 05:25:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=672340 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 672340 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 672340 ']' 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:05.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:05.200 05:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:05.200 [2024-12-09 05:25:46.852710] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:05.200 [2024-12-09 05:25:46.853648] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:31:05.200 [2024-12-09 05:25:46.853683] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:05.200 [2024-12-09 05:25:46.950346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:05.200 [2024-12-09 05:25:46.990967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:05.200 [2024-12-09 05:25:46.990998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:05.200 [2024-12-09 05:25:46.991008] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:05.200 [2024-12-09 05:25:46.991016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:05.200 [2024-12-09 05:25:46.991023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:05.200 [2024-12-09 05:25:46.992520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:05.200 [2024-12-09 05:25:46.992558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:05.200 [2024-12-09 05:25:46.992559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:05.200 [2024-12-09 05:25:47.060728] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:05.200 [2024-12-09 05:25:47.061397] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:05.200 [2024-12-09 05:25:47.061671] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:05.200 [2024-12-09 05:25:47.061801] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:05.200 05:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:05.200 05:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:31:05.200 05:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:05.200 05:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:05.200 05:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:05.200 05:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:05.200 05:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:31:05.200 05:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:05.200 [2024-12-09 05:25:47.301553] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:05.200 05:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:05.200 05:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:05.460 [2024-12-09 05:25:47.709933] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.460 05:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:05.719 05:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:31:05.719 Malloc0 00:31:05.719 05:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:05.979 Delay0 00:31:05.979 05:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.239 05:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:31:06.498 NULL1 00:31:06.498 05:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:31:06.498 05:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=672782 00:31:06.498 05:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:31:06.498 05:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:06.498 05:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:07.876 Read completed with error (sct=0, sc=11) 00:31:07.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.876 05:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:07.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:31:07.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.876 05:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:31:07.876 05:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:31:08.134 true 00:31:08.134 05:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:08.134 05:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.079 05:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:09.079 05:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:31:09.079 05:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:31:09.337 true 00:31:09.337 05:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:09.337 05:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:31:09.596 05:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:09.854 05:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:31:09.854 05:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:31:09.854 true 00:31:09.855 05:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:09.855 05:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:11.248 05:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:11.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:11.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:11.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:11.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:11.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:11.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:11.248 05:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1004 00:31:11.248 05:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:31:11.506 true 00:31:11.506 05:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:11.506 05:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.452 05:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.452 05:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:31:12.452 05:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:31:12.711 true 00:31:12.711 05:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:12.711 05:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.969 05:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.969 05:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:31:12.969 05:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:31:13.227 true 00:31:13.227 05:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:13.227 05:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:14.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.613 05:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:14.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.613 05:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:31:14.613 05:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:31:14.872 true 00:31:14.872 05:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:14.872 05:25:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:15.812 05:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:15.812 05:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:31:15.812 05:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:31:16.071 true 00:31:16.071 05:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:16.071 05:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:16.332 05:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:16.332 05:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:31:16.332 05:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 
00:31:16.593 true 00:31:16.593 05:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:16.593 05:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:17.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.974 05:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.974 05:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:31:17.974 05:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:31:18.234 true 00:31:18.234 05:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:18.234 05:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:19.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:31:19.172 05:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:19.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:19.172 05:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:31:19.172 05:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:31:19.432 true 00:31:19.432 05:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:19.432 05:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:19.432 05:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:19.691 05:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:31:19.691 05:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:31:19.950 true 00:31:19.950 05:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:19.950 05:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:21.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:21.332 05:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:21.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:21.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:21.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:21.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:21.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:21.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:21.332 05:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:31:21.332 05:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:31:21.592 true 00:31:21.592 05:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:21.592 05:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:22.531 05:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:22.531 05:26:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:31:22.531 05:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:31:22.802 true 00:31:22.802 05:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:22.802 05:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:22.802 05:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:23.063 05:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:31:23.063 05:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:31:23.322 true 00:31:23.322 05:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:23.322 05:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:24.258 05:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:24.516 05:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:31:24.516 05:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:31:24.774 true 00:31:24.774 05:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:24.774 05:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.034 05:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:25.034 05:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:31:25.034 05:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:31:25.292 true 00:31:25.292 05:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:25.292 05:26:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:26.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:26.668 05:26:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:26.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:26.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:26.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:26.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:26.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:26.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:26.668 05:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:31:26.668 05:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:31:26.927 true 00:31:26.927 05:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:26.927 05:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:27.864 05:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:27.865 05:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:31:27.865 05:26:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:31:28.124 true 00:31:28.124 05:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:28.124 05:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.383 05:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:28.643 05:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:31:28.643 05:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:31:28.643 true 00:31:28.643 05:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:28.643 05:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:30.024 05:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:30.024 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:31:30.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:30.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:30.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:30.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:30.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:30.024 05:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:30.024 05:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:30.282 true 00:31:30.282 05:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:30.282 05:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.219 05:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:31.219 05:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:31.219 05:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:31.484 true 00:31:31.484 05:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:31.484 05:26:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.744 05:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:32.003 05:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:31:32.003 05:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:31:32.003 true 00:31:32.003 05:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:32.003 05:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:33.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:33.390 05:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:33.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:33.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:33.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:33.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:33.390 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:33.390 05:26:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:33.390 05:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:31:33.649 true 00:31:33.649 05:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:33.649 05:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:34.586 05:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:34.586 05:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:31:34.586 05:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:34.845 true 00:31:34.845 05:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:34.845 05:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:34.845 05:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:31:35.104 05:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:35.104 05:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:35.364 true 00:31:35.364 05:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782 00:31:35.364 05:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:36.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:36.559 05:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:36.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:36.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:36.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:36.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:36.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:36.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:36.559 05:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:36.559 05:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:36.818 true 00:31:36.818 05:26:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782
00:31:36.818 05:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:37.756 05:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:37.756 Initializing NVMe Controllers
00:31:37.756 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:37.756 Controller IO queue size 128, less than required.
00:31:37.756 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:37.756 Controller IO queue size 128, less than required.
00:31:37.756 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:37.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:37.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:37.756 Initialization complete. Launching workers.
00:31:37.756 ========================================================
00:31:37.756                                                                                                  Latency(us)
00:31:37.756 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:31:37.756 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2262.07       1.10   41149.59    2790.98 1027128.17
00:31:37.756 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   18779.93       9.17    6815.46    1946.80  358065.73
00:31:37.756 ========================================================
00:31:37.756 Total                                                                  :   21042.00      10.27   10506.46    1946.80 1027128.17
00:31:38.014 05:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:31:38.014 05:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:31:38.014 true
00:31:38.014 05:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 672782
00:31:38.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (672782) - No such process
00:31:38.014 05:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 672782
00:31:38.014 05:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:38.271 05:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:31:38.529 05:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:31:38.529
05:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:31:38.529 05:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:31:38.529 05:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:38.529 05:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:31:38.787 null0 00:31:38.787 05:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:38.787 05:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:38.788 05:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:31:38.788 null1 00:31:38.788 05:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:38.788 05:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:38.788 05:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:31:39.046 null2 00:31:39.047 05:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:39.047 05:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:39.047 05:26:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:39.305 null3 00:31:39.305 05:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:39.305 05:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:39.305 05:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:39.305 null4 00:31:39.305 05:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:39.305 05:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:39.305 05:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:39.565 null5 00:31:39.565 05:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:39.565 05:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:39.565 05:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:39.824 null6 00:31:39.824 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:39.824 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:39.824 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:39.824 null7 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 678278 678279 678282 678283 678285 678287 678289 678291 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 
00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:40.084 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:40.085 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:40.085 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:40.085 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 
00:31:40.085 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:40.085 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:40.344 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.344 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.344 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:40.344 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.344 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.344 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:40.344 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.344 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.344 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:40.344 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.344 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.344 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:40.344 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.345 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.345 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:40.345 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.345 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.345 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:40.345 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.345 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.345 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:40.345 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.345 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.345 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:40.605 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:40.605 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:40.605 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:40.605 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:40.605 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:40.605 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:40.605 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:40.605 05:26:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:40.865 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.865 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.865 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:40.865 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.865 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.865 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:40.865 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.865 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.865 05:26:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:40.865 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.865 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.865 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:40.865 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.865 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.865 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:40.865 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.865 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.865 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:40.865 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.865 05:26:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:40.865 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.865 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:40.865 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:40.866 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:40.866 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:40.866 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 
null2 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.126 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:41.409 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:41.409 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:41.409 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:41.409 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:31:41.409 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:41.409 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:41.409 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:41.409 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.670 05:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:41.670 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:41.670 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:41.670 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:41.670 05:26:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:41.670 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:41.930 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:41.931 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:41.931 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:42.192 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:42.192 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:42.192 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:42.192 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:42.192 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:42.192 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:42.192 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:42.192 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:42.452 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:42.452 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:42.452 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:42.452 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:42.453 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:42.453 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:42.453 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:42.453 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:42.453 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:42.453 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:42.453 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:42.453 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:42.453 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:42.453 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:42.453 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 
null1 00:31:42.453 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:42.453 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:42.453 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:42.453 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:42.453 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:42.453 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:42.453 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:42.453 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:42.453 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:42.713 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:42.713 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:42.713 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:42.713 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:42.713 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:42.713 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:42.713 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:42.713 05:26:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:42.713 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:42.713 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:42.713 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:42.713 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:42.713 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:42.713 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:42.713 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:42.713 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:42.713 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:42.713 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:42.713 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:42.713 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:42.713 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:42.713 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:42.713 05:26:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:42.713 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:42.714 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:42.714 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:42.714 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:42.714 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:42.714 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:42.714 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:42.714 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:42.714 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:42.973 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:42.973 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:42.973 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:42.973 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:42.973 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:42.973 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:42.973 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:42.973 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:43.233 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:43.492 05:26:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:43.492 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:43.492 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:43.492 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:43.492 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:43.492 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:43.492 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:43.492 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:43.492 05:26:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:43.492 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:43.492 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:43.752 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:43.752 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:43.752 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:43.752 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:43.752 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:43.752 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:43.752 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:43.752 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:43.752 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:43.752 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:43.752 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:43.752 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:43.752 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:43.752 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:43.752 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:43.752 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:43.752 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:43.752 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:43.752 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:43.752 05:26:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:43.752 05:26:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:43.752 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:43.752 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:43.752 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:43.752 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:43.752 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:43.752 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:43.752 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:43.752 05:26:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:44.012 rmmod nvme_tcp 00:31:44.012 rmmod nvme_fabrics 00:31:44.012 rmmod nvme_keyring 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:44.012 05:26:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 672340 ']' 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 672340 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 672340 ']' 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 672340 00:31:44.012 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:31:44.270 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:44.270 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 672340 00:31:44.270 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:44.270 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:44.270 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 672340' 00:31:44.270 killing process with pid 672340 00:31:44.270 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 672340 00:31:44.270 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 672340 00:31:44.529 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:44.529 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:31:44.529 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:44.529 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:44.529 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:31:44.529 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:44.529 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:31:44.529 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:44.529 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:44.529 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.529 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:44.529 05:26:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.434 05:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:46.434 00:31:46.434 real 0m49.541s 00:31:46.434 user 2m53.846s 00:31:46.434 sys 0m26.920s 00:31:46.434 05:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:46.434 05:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:46.434 ************************************ 00:31:46.434 END TEST nvmf_ns_hotplug_stress 00:31:46.434 
************************************ 00:31:46.434 05:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:46.434 05:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:46.434 05:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:46.434 05:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:46.694 ************************************ 00:31:46.694 START TEST nvmf_delete_subsystem 00:31:46.694 ************************************ 00:31:46.694 05:26:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:46.694 * Looking for test storage... 
00:31:46.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:46.694 05:26:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:46.694 05:26:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:46.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.694 --rc genhtml_branch_coverage=1 00:31:46.694 --rc genhtml_function_coverage=1 00:31:46.694 --rc genhtml_legend=1 00:31:46.694 --rc geninfo_all_blocks=1 00:31:46.694 --rc geninfo_unexecuted_blocks=1 00:31:46.694 00:31:46.694 ' 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:46.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.694 --rc genhtml_branch_coverage=1 00:31:46.694 --rc genhtml_function_coverage=1 00:31:46.694 --rc genhtml_legend=1 00:31:46.694 --rc geninfo_all_blocks=1 00:31:46.694 --rc geninfo_unexecuted_blocks=1 00:31:46.694 00:31:46.694 ' 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:46.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.694 --rc genhtml_branch_coverage=1 00:31:46.694 --rc genhtml_function_coverage=1 00:31:46.694 --rc genhtml_legend=1 00:31:46.694 --rc geninfo_all_blocks=1 00:31:46.694 --rc geninfo_unexecuted_blocks=1 00:31:46.694 00:31:46.694 ' 00:31:46.694 05:26:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:46.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.694 --rc genhtml_branch_coverage=1 00:31:46.694 --rc genhtml_function_coverage=1 00:31:46.694 --rc genhtml_legend=1 00:31:46.694 --rc geninfo_all_blocks=1 00:31:46.694 --rc geninfo_unexecuted_blocks=1 00:31:46.694 00:31:46.694 ' 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:46.694 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:46.695 05:26:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.695 
05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.695 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:46.954 05:26:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:46.954 05:26:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:55.096 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:55.096 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.1 (0x8086 - 0x159b)' 00:31:55.097 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:55.097 05:26:36 
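The heavily escaped comparisons above (`\0\x\1\0\1\7`, `\0\x\1\0\1\9`) are just xtrace's rendering of literal string matches: inside `[[ ]]` the right-hand side of `==` is a glob pattern, so quoting or escaping it forces a byte-for-byte compare. A small illustrative sketch (variable names are hypothetical):

```shell
# Inside [[ ]] the RHS of == is a glob unless quoted or escaped; xtrace
# prints the escaped literal form, which is exactly what the log shows.
pci_device_id=0x159b           # the E810 device ID from the log

matches_mlx5=no
if [[ $pci_device_id == "0x1017" || $pci_device_id == "0x1019" ]]; then
  matches_mlx5=yes             # would take the Mellanox-specific path
fi
echo "$matches_mlx5"
```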
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:55.097 Found net devices under 0000:af:00.0: cvl_0_0 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:55.097 Found net devices under 0000:af:00.1: cvl_0_1 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:55.097 05:26:36 
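The `pci_net_devs=("${pci_net_devs[@]##*/}")` step above trims sysfs paths down to bare interface names. A self-contained sketch of that expansion:

```shell
# The glob in the trace expands to full sysfs paths; '##*/' strips the
# longest prefix ending in '/', leaving just the interface name.
pci_net_devs=("/sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0")
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[0]}"
```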
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
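With two discovered interfaces, the trace assigns cvl_0_0 as the target-side interface and cvl_0_1 as the initiator side, with fixed 10.0.0.x addresses. A sketch of that selection (pure variable logic, no network calls):

```shell
# Interface/IP selection as in nvmf_tcp_init: index 0 of net_devs goes
# into the target namespace, index 1 stays on the host for the initiator.
net_devs=(cvl_0_0 cvl_0_1)           # names taken from the log

NVMF_FIRST_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

if (( ${#net_devs[@]} > 1 )); then
  NVMF_TARGET_INTERFACE=${net_devs[0]}
  NVMF_INITIATOR_INTERFACE=${net_devs[1]}
fi
echo "$NVMF_TARGET_INTERFACE=$NVMF_FIRST_TARGET_IP $NVMF_INITIATOR_INTERFACE=$NVMF_FIRST_INITIATOR_IP"
```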
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:31:55.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:55.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:31:55.097 00:31:55.097 --- 10.0.0.2 ping statistics --- 00:31:55.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.097 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:55.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:55.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:31:55.097 00:31:55.097 --- 10.0.0.1 ping statistics --- 00:31:55.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.097 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
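Both directions ping cleanly before the target starts. For a triager who wants the average rtt out of the summary programmatically, the `min/avg/max/mdev` line parses with a simple field split (the awk one-liner below is illustrative, not part of the test scripts):

```shell
# Pull the avg rtt out of the ping summary format shown above.
summary='rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms'
# Splitting on '/' and ' ' puts min at field 7 and avg at field 8.
avg_rtt=$(printf '%s\n' "$summary" | awk -F'[/ ]' '{print $8}')
echo "avg=${avg_rtt}ms"
```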
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=682926 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 682926 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 682926 ']' 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
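`waitforlisten` blocks until nvmf_tgt's RPC socket at /var/tmp/spdk.sock accepts connections, retrying up to max_retries times. A stand-in sketch of that polling loop (the socket here is a plain file created up front, since no target is running):

```shell
# Polling loop in the spirit of waitforlisten: wait for the RPC socket
# path to appear, bounded by max_retries. The /tmp path is a stand-in.
rpc_addr=/tmp/spdk_sock_sketch.$$
: > "$rpc_addr"                 # stand-in for nvmf_tgt creating the socket

max_retries=100
i=0
until [ -e "$rpc_addr" ] || [ "$i" -ge "$max_retries" ]; do
  i=$((i + 1))
  sleep 0.1
done
[ "$i" -lt "$max_retries" ] && echo "socket is up after $i retries"
rm -f "$rpc_addr"
```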
00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:55.097 05:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:55.097 [2024-12-09 05:26:36.496248] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:55.097 [2024-12-09 05:26:36.497169] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:31:55.098 [2024-12-09 05:26:36.497204] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.098 [2024-12-09 05:26:36.594254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:55.098 [2024-12-09 05:26:36.634036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:55.098 [2024-12-09 05:26:36.634073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:55.098 [2024-12-09 05:26:36.634082] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:55.098 [2024-12-09 05:26:36.634091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:55.098 [2024-12-09 05:26:36.634114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:55.098 [2024-12-09 05:26:36.635389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.098 [2024-12-09 05:26:36.635391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.098 [2024-12-09 05:26:36.703053] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:31:55.098 [2024-12-09 05:26:36.703400] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:55.098 [2024-12-09 05:26:36.703701] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:55.098 [2024-12-09 05:26:37.380240] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:55.098 [2024-12-09 05:26:37.412636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:55.098 NULL1 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:55.098 Delay0 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=683111 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:55.098 05:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:55.098 [2024-12-09 05:26:37.534104] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
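Between target start and the delete, the script issues the RPC sequence shown above: create the TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, a null bdev, and a delay bdev (Delay0) wrapping NULL1 with 1,000,000 us latencies, so that spdk_nvme_perf I/O is still in flight when nvmf_delete_subsystem fires. Collected as data for reference (the commands are verbatim from the log; nothing is executed against a target here):

```shell
# The RPC calls from delete_subsystem.sh, verbatim from the trace,
# gathered into an array so the sequence can be inspected or replayed
# (e.g. via scripts/rpc.py) without a live nvmf_tgt.
rpc_sequence=(
  'nvmf_create_transport -t tcp -o -u 8192'
  'nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10'
  'nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420'
  'bdev_null_create NULL1 1000 512'
  'bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000'
  'nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0'
)
echo "${#rpc_sequence[@]} rpc calls"
```

The flood of `completed with error (sct=0, sc=8)` lines that follows is the expected outcome: in the NVMe generic status set, status 0x08 is Command Aborted due to SQ Deletion, i.e. queued I/O aborted when the subsystem and its queues were torn down mid-run.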
00:31:57.017 05:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:57.017 05:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.017 05:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 starting I/O failed: -6 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 starting I/O failed: -6 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 starting I/O failed: -6 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 starting I/O failed: -6 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 starting I/O failed: -6 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 starting I/O failed: -6 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, 
sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 starting I/O failed: -6 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 starting I/O failed: -6 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 starting I/O failed: -6 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 starting I/O failed: -6 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 [2024-12-09 05:26:39.718379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656900 is same with the state(6) to be set 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read 
completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 starting I/O failed: -6 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 
00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 starting I/O failed: -6 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 starting I/O failed: -6 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 starting I/O failed: -6 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read 
completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 starting I/O failed: -6 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 Read completed with error (sct=0, sc=8) 00:31:57.277 starting I/O failed: -6 00:31:57.277 Write completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Write completed with error (sct=0, sc=8) 00:31:57.278 Write completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Write completed with error (sct=0, sc=8) 00:31:57.278 starting I/O failed: -6 00:31:57.278 Write completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Write completed with error (sct=0, sc=8) 00:31:57.278 Write completed with error (sct=0, sc=8) 00:31:57.278 Write completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Write completed with error 
(sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 starting I/O failed: -6 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 starting I/O failed: -6 00:31:57.278 Write completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Write completed with error (sct=0, sc=8) 00:31:57.278 starting I/O failed: -6 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Write completed with error (sct=0, sc=8) 00:31:57.278 starting I/O failed: -6 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Write completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 starting I/O failed: -6 00:31:57.278 Write completed with error (sct=0, sc=8) 00:31:57.278 Write completed with error (sct=0, sc=8) 00:31:57.278 Write completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 starting I/O failed: -6 00:31:57.278 Write completed with error (sct=0, sc=8) 00:31:57.278 Read completed with error (sct=0, sc=8) 00:31:57.278 Write completed with error (sct=0, sc=8) 00:31:57.278 [2024-12-09 05:26:39.719173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb50400d4d0 is same with the state(6) to be set 00:31:58.215 [2024-12-09 05:26:40.670168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656720 is same with the 
state(6) to be set 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 [2024-12-09 05:26:40.720452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb504000c40 is same with the state(6) 
to be set 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 [2024-12-09 05:26:40.720640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb50400d800 is same with the state(6) to be set 00:31:58.475 Read completed with error (sct=0, sc=8) 
00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 [2024-12-09 05:26:40.720757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x655410 is same with the state(6) to be set 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 
Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Write completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 Read completed with error (sct=0, sc=8) 00:31:58.475 [2024-12-09 05:26:40.721425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb50400d020 is same with the state(6) to be set 00:31:58.475 Initializing NVMe Controllers 00:31:58.475 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:58.475 Controller IO queue size 128, less than required. 00:31:58.475 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:58.475 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:58.475 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:58.475 Initialization complete. Launching workers. 
00:31:58.475 ======================================================== 00:31:58.475 Latency(us) 00:31:58.475 Device Information : IOPS MiB/s Average min max 00:31:58.475 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 156.85 0.08 869358.28 259.49 1012530.91 00:31:58.475 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 187.12 0.09 951464.42 1423.24 1012578.75 00:31:58.475 ======================================================== 00:31:58.475 Total : 343.97 0.17 914024.97 259.49 1012578.75 00:31:58.475 00:31:58.475 [2024-12-09 05:26:40.722303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656720 (9): Bad file descriptor 00:31:58.475 05:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.475 05:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:31:58.475 05:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 683111 00:31:58.475 05:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:31:58.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 683111 00:31:59.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (683111) - No such process 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 683111 00:31:59.045 05:26:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 683111 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 683111 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:59.045 [2024-12-09 05:26:41.256527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=683740 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 683740 00:31:59.045 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:59.045 [2024-12-09 05:26:41.348608] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:31:59.390 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:59.390 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 683740 00:31:59.390 05:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:00.096 05:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:00.096 05:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 683740 00:32:00.096 05:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:00.387 05:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:00.387 05:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 683740 00:32:00.387 05:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:00.956 05:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( 
delay++ > 20 )) 00:32:00.956 05:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 683740 00:32:00.956 05:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:01.524 05:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:01.524 05:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 683740 00:32:01.524 05:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:02.093 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:02.093 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 683740 00:32:02.093 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:02.093 Initializing NVMe Controllers 00:32:02.093 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:02.093 Controller IO queue size 128, less than required. 00:32:02.093 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:02.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:02.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:02.093 Initialization complete. Launching workers. 
00:32:02.093 ======================================================== 00:32:02.093 Latency(us) 00:32:02.093 Device Information : IOPS MiB/s Average min max 00:32:02.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002133.61 1000193.25 1005856.47 00:32:02.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004719.95 1000182.92 1041072.45 00:32:02.093 ======================================================== 00:32:02.093 Total : 256.00 0.12 1003426.78 1000182.92 1041072.45 00:32:02.093 00:32:02.353 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:02.353 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 683740 00:32:02.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (683740) - No such process 00:32:02.353 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 683740 00:32:02.353 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:32:02.353 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:32:02.353 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:02.353 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:32:02.353 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:02.353 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:32:02.353 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:32:02.353 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:02.613 rmmod nvme_tcp 00:32:02.613 rmmod nvme_fabrics 00:32:02.613 rmmod nvme_keyring 00:32:02.614 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:02.614 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:32:02.614 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:32:02.614 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 682926 ']' 00:32:02.614 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 682926 00:32:02.614 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 682926 ']' 00:32:02.614 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 682926 00:32:02.614 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:32:02.614 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:02.614 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 682926 00:32:02.614 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:02.614 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:02.614 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 682926' 00:32:02.614 killing process with pid 682926 00:32:02.614 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 682926 00:32:02.614 05:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 682926 00:32:02.874 05:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:02.874 05:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:02.874 05:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:02.874 05:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:32:02.874 05:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:32:02.874 05:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:02.874 05:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:32:02.874 05:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:02.874 05:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:02.874 05:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.874 05:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:02.874 05:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.778 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:04.778 00:32:04.778 real 0m18.305s 00:32:04.778 user 0m25.969s 00:32:04.778 sys 0m8.320s 00:32:04.778 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:04.778 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:04.778 ************************************ 00:32:04.778 END TEST nvmf_delete_subsystem 00:32:04.778 ************************************ 00:32:05.037 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:05.037 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:05.037 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:05.037 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:05.037 ************************************ 00:32:05.037 START TEST nvmf_host_management 00:32:05.037 ************************************ 00:32:05.037 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:05.037 * Looking for test storage... 
00:32:05.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:05.037 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:05.037 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:32:05.037 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:32:05.297 05:26:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:05.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.297 --rc genhtml_branch_coverage=1 00:32:05.297 --rc genhtml_function_coverage=1 00:32:05.297 --rc genhtml_legend=1 00:32:05.297 --rc geninfo_all_blocks=1 00:32:05.297 --rc geninfo_unexecuted_blocks=1 00:32:05.297 00:32:05.297 ' 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:05.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.297 --rc genhtml_branch_coverage=1 00:32:05.297 --rc genhtml_function_coverage=1 00:32:05.297 --rc genhtml_legend=1 00:32:05.297 --rc geninfo_all_blocks=1 00:32:05.297 --rc geninfo_unexecuted_blocks=1 00:32:05.297 00:32:05.297 ' 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:05.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.297 --rc genhtml_branch_coverage=1 00:32:05.297 --rc genhtml_function_coverage=1 00:32:05.297 --rc genhtml_legend=1 00:32:05.297 --rc geninfo_all_blocks=1 00:32:05.297 --rc geninfo_unexecuted_blocks=1 00:32:05.297 00:32:05.297 ' 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:05.297 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.297 --rc genhtml_branch_coverage=1 00:32:05.297 --rc genhtml_function_coverage=1 00:32:05.297 --rc genhtml_legend=1 00:32:05.297 --rc geninfo_all_blocks=1 00:32:05.297 --rc geninfo_unexecuted_blocks=1 00:32:05.297 00:32:05.297 ' 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:32:05.297 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:05.298 05:26:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.298 
05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:32:05.298 05:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:32:13.428 
05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:13.428 05:26:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:13.428 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:13.429 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:13.429 05:26:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:13.429 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:13.429 05:26:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:13.429 Found net devices under 0000:af:00.0: cvl_0_0 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:13.429 Found net devices under 0000:af:00.1: cvl_0_1 00:32:13.429 05:26:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:13.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:13.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:32:13.429 00:32:13.429 --- 10.0.0.2 ping statistics --- 00:32:13.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:13.429 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:13.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:13.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:32:13.429 00:32:13.429 --- 10.0.0.1 ping statistics --- 00:32:13.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:13.429 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:13.429 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:13.430 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:13.430 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:32:13.430 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:13.430 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:13.430 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:32:13.430 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:32:13.430 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:32:13.430 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:13.430 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:13.430 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:13.430 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=687986 00:32:13.430 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:32:13.430 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 687986 00:32:13.430 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 687986 ']' 00:32:13.430 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:13.430 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:32:13.430 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:13.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:13.430 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:13.430 05:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:13.430 [2024-12-09 05:26:54.874267] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:13.430 [2024-12-09 05:26:54.875280] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:32:13.430 [2024-12-09 05:26:54.875322] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:13.430 [2024-12-09 05:26:54.956089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:13.430 [2024-12-09 05:26:54.997349] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:13.430 [2024-12-09 05:26:54.997388] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:13.430 [2024-12-09 05:26:54.997397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:13.430 [2024-12-09 05:26:54.997405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:13.430 [2024-12-09 05:26:54.997412] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:13.430 [2024-12-09 05:26:54.999187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:13.430 [2024-12-09 05:26:54.999300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:13.430 [2024-12-09 05:26:54.999409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:13.430 [2024-12-09 05:26:54.999408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:13.430 [2024-12-09 05:26:55.068918] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:13.430 [2024-12-09 05:26:55.069572] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:13.430 [2024-12-09 05:26:55.069782] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:13.430 [2024-12-09 05:26:55.070046] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:13.430 [2024-12-09 05:26:55.070097] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:13.430 [2024-12-09 05:26:55.144334] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:13.430 05:26:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:13.430 Malloc0 00:32:13.430 [2024-12-09 05:26:55.248488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=688137 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 688137 /var/tmp/bdevperf.sock 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 688137 ']' 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:13.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:13.430 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:13.430 { 00:32:13.430 "params": { 00:32:13.430 "name": "Nvme$subsystem", 00:32:13.430 "trtype": "$TEST_TRANSPORT", 00:32:13.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:13.431 "adrfam": "ipv4", 00:32:13.431 "trsvcid": "$NVMF_PORT", 00:32:13.431 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:32:13.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:13.431 "hdgst": ${hdgst:-false}, 00:32:13.431 "ddgst": ${ddgst:-false} 00:32:13.431 }, 00:32:13.431 "method": "bdev_nvme_attach_controller" 00:32:13.431 } 00:32:13.431 EOF 00:32:13.431 )") 00:32:13.431 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:13.431 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:32:13.431 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:13.431 05:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:13.431 "params": { 00:32:13.431 "name": "Nvme0", 00:32:13.431 "trtype": "tcp", 00:32:13.431 "traddr": "10.0.0.2", 00:32:13.431 "adrfam": "ipv4", 00:32:13.431 "trsvcid": "4420", 00:32:13.431 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:13.431 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:13.431 "hdgst": false, 00:32:13.431 "ddgst": false 00:32:13.431 }, 00:32:13.431 "method": "bdev_nvme_attach_controller" 00:32:13.431 }' 00:32:13.431 [2024-12-09 05:26:55.353847] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:32:13.431 [2024-12-09 05:26:55.353900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688137 ] 00:32:13.431 [2024-12-09 05:26:55.449115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.431 [2024-12-09 05:26:55.488758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.431 Running I/O for 10 seconds... 
00:32:13.999 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:13.999 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:13.999 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:13.999 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.999 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:32:14.000 05:26:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1091 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1091 -ge 100 ']' 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:14.000 
05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.000 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:14.000 [2024-12-09 05:26:56.263185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.000 [2024-12-09 05:26:56.263228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.000 [2024-12-09 05:26:56.263249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.000 [2024-12-09 05:26:56.263268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:14.000 [2024-12-09 05:26:56.263286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 
[2024-12-09 05:26:56.263296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd62ad0 is same with the state(6) to be set 00:32:14.000 [2024-12-09 05:26:56.263336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 
05:26:56.263779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.000 [2024-12-09 05:26:56.263788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.000 [2024-12-09 05:26:56.263798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.263807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.263817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.263827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.263837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.263846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.263857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.263866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.263877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.263886] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.263896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.263905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.263915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.263925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.263935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.263944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.263955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.263965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.263976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.263985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.263995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 
05:26:56.264224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.001 [2024-12-09 05:26:56.264557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.001 [2024-12-09 05:26:56.264567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.002 [2024-12-09 05:26:56.264576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.002 [2024-12-09 05:26:56.264586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.002 [2024-12-09 05:26:56.264597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:14.002 [2024-12-09 05:26:56.265508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:14.002 task offset: 16384 on job bdev=Nvme0n1 fails 00:32:14.002 00:32:14.002 Latency(us) 00:32:14.002 [2024-12-09T04:26:56.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.002 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:14.002 Job: Nvme0n1 ended in about 0.58 seconds with error 00:32:14.002 Verification LBA range: start 0x0 length 0x400 00:32:14.002 Nvme0n1 : 0.58 1988.70 124.29 110.48 0.00 29869.56 1913.65 26319.26 00:32:14.002 [2024-12-09T04:26:56.472Z] =================================================================================================================== 00:32:14.002 [2024-12-09T04:26:56.472Z] Total : 1988.70 124.29 110.48 0.00 29869.56 1913.65 26319.26 00:32:14.002 [2024-12-09 05:26:56.267778] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:14.002 [2024-12-09 05:26:56.267800] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd62ad0 (9): Bad file descriptor 00:32:14.002 [2024-12-09 05:26:56.270574] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:32:14.002 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.002 05:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:32:14.938 05:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 688137 00:32:14.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (688137) - No such process 00:32:14.938 05:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:32:14.938 05:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:32:14.938 05:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:32:14.938 05:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:32:14.938 05:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:14.938 05:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:14.938 05:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:14.938 05:26:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:14.938 { 00:32:14.938 "params": { 00:32:14.938 "name": "Nvme$subsystem", 00:32:14.938 "trtype": "$TEST_TRANSPORT", 00:32:14.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:14.938 "adrfam": "ipv4", 00:32:14.938 "trsvcid": "$NVMF_PORT", 00:32:14.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:14.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:14.938 "hdgst": ${hdgst:-false}, 00:32:14.938 "ddgst": ${ddgst:-false} 00:32:14.938 }, 00:32:14.938 "method": "bdev_nvme_attach_controller" 00:32:14.938 } 00:32:14.938 EOF 00:32:14.938 )") 00:32:14.938 05:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:14.938 05:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:32:14.938 05:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:14.938 05:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:14.938 "params": { 00:32:14.938 "name": "Nvme0", 00:32:14.938 "trtype": "tcp", 00:32:14.938 "traddr": "10.0.0.2", 00:32:14.938 "adrfam": "ipv4", 00:32:14.938 "trsvcid": "4420", 00:32:14.938 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:14.938 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:14.938 "hdgst": false, 00:32:14.938 "ddgst": false 00:32:14.938 }, 00:32:14.938 "method": "bdev_nvme_attach_controller" 00:32:14.938 }' 00:32:14.938 [2024-12-09 05:26:57.333255] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:32:14.938 [2024-12-09 05:26:57.333306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688566 ] 00:32:15.197 [2024-12-09 05:26:57.425862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.197 [2024-12-09 05:26:57.462839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.197 Running I/O for 1 seconds... 00:32:16.575 2048.00 IOPS, 128.00 MiB/s 00:32:16.576 Latency(us) 00:32:16.576 [2024-12-09T04:26:59.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:16.576 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:16.576 Verification LBA range: start 0x0 length 0x400 00:32:16.576 Nvme0n1 : 1.03 2057.72 128.61 0.00 0.00 30634.57 6081.74 26214.40 00:32:16.576 [2024-12-09T04:26:59.046Z] =================================================================================================================== 00:32:16.576 [2024-12-09T04:26:59.046Z] Total : 2057.72 128.61 0.00 0.00 30634.57 6081.74 26214.40 00:32:16.576 05:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:32:16.576 05:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:32:16.576 05:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:16.576 05:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:16.576 05:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:32:16.576 05:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:16.576 05:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:32:16.576 05:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:16.576 05:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:32:16.576 05:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:16.576 05:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:16.576 rmmod nvme_tcp 00:32:16.576 rmmod nvme_fabrics 00:32:16.576 rmmod nvme_keyring 00:32:16.576 05:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:16.576 05:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:32:16.576 05:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:32:16.576 05:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 687986 ']' 00:32:16.576 05:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 687986 00:32:16.576 05:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 687986 ']' 00:32:16.576 05:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 687986 00:32:16.576 05:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:32:16.576 05:26:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:16.576 05:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 687986 00:32:16.576 05:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:16.576 05:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:16.576 05:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 687986' 00:32:16.576 killing process with pid 687986 00:32:16.576 05:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 687986 00:32:16.576 05:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 687986 00:32:16.835 [2024-12-09 05:26:59.225461] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:32:16.835 05:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:16.835 05:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:16.835 05:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:16.835 05:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:32:16.835 05:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:32:16.835 05:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:16.835 05:26:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:32:16.835 05:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:16.835 05:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:16.835 05:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.835 05:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:16.835 05:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:32:19.376 00:32:19.376 real 0m14.017s 00:32:19.376 user 0m18.708s 00:32:19.376 sys 0m8.300s 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:19.376 ************************************ 00:32:19.376 END TEST nvmf_host_management 00:32:19.376 ************************************ 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:19.376 
05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:19.376 ************************************ 00:32:19.376 START TEST nvmf_lvol 00:32:19.376 ************************************ 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:19.376 * Looking for test storage... 00:32:19.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:32:19.376 05:27:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:19.376 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:19.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.377 --rc genhtml_branch_coverage=1 00:32:19.377 --rc 
genhtml_function_coverage=1 00:32:19.377 --rc genhtml_legend=1 00:32:19.377 --rc geninfo_all_blocks=1 00:32:19.377 --rc geninfo_unexecuted_blocks=1 00:32:19.377 00:32:19.377 ' 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:19.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.377 --rc genhtml_branch_coverage=1 00:32:19.377 --rc genhtml_function_coverage=1 00:32:19.377 --rc genhtml_legend=1 00:32:19.377 --rc geninfo_all_blocks=1 00:32:19.377 --rc geninfo_unexecuted_blocks=1 00:32:19.377 00:32:19.377 ' 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:19.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.377 --rc genhtml_branch_coverage=1 00:32:19.377 --rc genhtml_function_coverage=1 00:32:19.377 --rc genhtml_legend=1 00:32:19.377 --rc geninfo_all_blocks=1 00:32:19.377 --rc geninfo_unexecuted_blocks=1 00:32:19.377 00:32:19.377 ' 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:19.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.377 --rc genhtml_branch_coverage=1 00:32:19.377 --rc genhtml_function_coverage=1 00:32:19.377 --rc genhtml_legend=1 00:32:19.377 --rc geninfo_all_blocks=1 00:32:19.377 --rc geninfo_unexecuted_blocks=1 00:32:19.377 00:32:19.377 ' 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.377 05:27:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:19.377 05:27:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:32:19.377 05:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:27.504 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:27.504 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:32:27.504 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:27.504 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:27.504 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:27.504 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:27.504 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:32:27.504 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:32:27.504 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:27.504 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:32:27.504 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:32:27.504 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:32:27.504 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:32:27.504 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:32:27.504 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:32:27.504 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:27.504 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:27.505 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:27.505 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:27.505 05:27:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:27.505 Found net devices under 0000:af:00.0: cvl_0_0 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:27.505 Found net devices under 0000:af:00.1: cvl_0_1 00:32:27.505 05:27:08 
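The trace above maps each PCI function to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/`* and then stripping the directory prefix with the `${arr[@]##*/}` expansion (nvmf/common.sh@411 and @427). A minimal standalone sketch of that expansion — the sysfs paths below are canned samples mirroring the log, not read from real hardware:

```shell
# Sketch of the pci -> net-device mapping traced at nvmf/common.sh@411/@427.
# The sample sysfs paths are illustrative, not taken from a live system.
pci_net_devs=(
  "/sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0"
  "/sys/bus/pci/devices/0000:af:00.1/net/cvl_0_1"
)
# ${arr[@]##*/} strips everything up to the last '/', leaving the device name.
pci_net_devs=("${pci_net_devs[@]##*/}")
printf '%s\n' "${pci_net_devs[@]}"
```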
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:27.505 05:27:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:27.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:27.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:32:27.505 00:32:27.505 --- 10.0.0.2 ping statistics --- 00:32:27.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:27.505 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:27.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:27.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:32:27.505 00:32:27.505 --- 10.0.0.1 ping statistics --- 00:32:27.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:27.505 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:32:27.505 
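The interface/namespace plumbing traced above (nvmf/common.sh@267 through @290) amounts to the sequence below: move the target-side port into a network namespace, address both ends of the back-to-back link, and open TCP port 4420 from the initiator side. This is a hedged dry-run sketch — `run` only echoes each command, since the real `ip`/`iptables` calls need root; device and address names mirror the log:

```shell
# Dry-run sketch of the back-to-back NVMe/TCP topology set up in the log.
# The target interface lives in a netns; the initiator stays in the host.
run() { echo "+ $*"; }   # swap for 'sudo "$@"' to actually apply

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Allow NVMe/TCP (port 4420) in from the initiator interface.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
```

The final `ping` pair in the log then verifies reachability in both directions before `nvmf_tgt` is started inside the namespace.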
05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:27.505 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:27.506 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:27.506 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=693092 00:32:27.506 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:32:27.506 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 693092 00:32:27.506 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 693092 ']' 00:32:27.506 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:27.506 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:27.506 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:27.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:27.506 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:27.506 05:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:27.506 [2024-12-09 05:27:09.048748] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:32:27.506 [2024-12-09 05:27:09.049682] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:32:27.506 [2024-12-09 05:27:09.049718] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:27.506 [2024-12-09 05:27:09.148175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:27.506 [2024-12-09 05:27:09.189821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:27.506 [2024-12-09 05:27:09.189859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:27.506 [2024-12-09 05:27:09.189869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:27.506 [2024-12-09 05:27:09.189878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:27.506 [2024-12-09 05:27:09.189885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:27.506 [2024-12-09 05:27:09.191434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.506 [2024-12-09 05:27:09.191523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.506 [2024-12-09 05:27:09.191524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:27.506 [2024-12-09 05:27:09.260281] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:27.506 [2024-12-09 05:27:09.261079] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:27.506 [2024-12-09 05:27:09.261426] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:27.506 [2024-12-09 05:27:09.261503] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:27.506 05:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:27.506 05:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:32:27.506 05:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:27.506 05:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:27.506 05:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:27.506 05:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:27.506 05:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:27.765 [2024-12-09 05:27:10.120403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.765 05:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:28.024 05:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:28.024 05:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:28.283 05:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:28.283 05:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:28.542 05:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:28.801 05:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7e599575-62ad-4162-91e9-97a85646127f 00:32:28.801 05:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7e599575-62ad-4162-91e9-97a85646127f lvol 20 00:32:28.801 05:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0a1ba19e-82f2-4626-94a0-6ae9a93622f0 00:32:28.801 05:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:29.061 05:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0a1ba19e-82f2-4626-94a0-6ae9a93622f0 00:32:29.320 05:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:29.320 [2024-12-09 05:27:11.724269] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:29.320 05:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:29.580 
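The RPC sequence traced above (two malloc bdevs → `bdev_raid_create` → `bdev_lvol_create_lvstore` → `bdev_lvol_create` → subsystem create/add_ns/add_listener) can be collected into one script. `rpc` below is a stand-in stub that only records each call — running it for real requires a live `nvmf_tgt` and `scripts/rpc.py` — and the `<lvs-uuid>`/`<lvol-uuid>` placeholders stand for the UUIDs the real RPCs return (e.g. `7e599575-...` in the log); method names and arguments match the trace:

```shell
# Stubbed sketch of the lvol-over-raid0 provisioning flow from the trace.
# rpc() stands in for 'scripts/rpc.py'; here it only logs the method name.
calls=()
rpc() { calls+=("$1"); echo "rpc: $*"; }

rpc bdev_malloc_create 64 512                  # -> Malloc0
rpc bdev_malloc_create 64 512                  # -> Malloc1
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"
rpc bdev_lvol_create_lvstore raid0 lvs         # -> lvstore UUID
rpc bdev_lvol_create -u "<lvs-uuid>" lvol 20   # -> lvol UUID
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "<lvol-uuid>"
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
echo "${#calls[@]} rpc calls"
```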
05:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:29.580 05:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=693628 00:32:29.580 05:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:32:30.519 05:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0a1ba19e-82f2-4626-94a0-6ae9a93622f0 MY_SNAPSHOT 00:32:30.779 05:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=53effb0b-c68c-4b32-a2c4-f4c260add05e 00:32:30.779 05:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0a1ba19e-82f2-4626-94a0-6ae9a93622f0 30 00:32:31.038 05:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 53effb0b-c68c-4b32-a2c4-f4c260add05e MY_CLONE 00:32:31.298 05:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=595a3b69-63df-48fd-b8b6-5282af133b62 00:32:31.298 05:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 595a3b69-63df-48fd-b8b6-5282af133b62 00:32:31.867 05:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 693628 00:32:39.999 Initializing NVMe Controllers 00:32:39.999 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:39.999 
Controller IO queue size 128, less than required. 00:32:39.999 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:39.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:39.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:39.999 Initialization complete. Launching workers. 00:32:39.999 ======================================================== 00:32:39.999 Latency(us) 00:32:39.999 Device Information : IOPS MiB/s Average min max 00:32:39.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12665.90 49.48 10107.36 1650.18 61645.56 00:32:39.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12523.90 48.92 10219.15 3400.73 59079.36 00:32:39.999 ======================================================== 00:32:39.999 Total : 25189.79 98.40 10162.94 1650.18 61645.56 00:32:39.999 00:32:39.999 05:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:40.257 05:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0a1ba19e-82f2-4626-94a0-6ae9a93622f0 00:32:40.517 05:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7e599575-62ad-4162-91e9-97a85646127f 00:32:40.517 05:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:40.517 05:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:40.517 05:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:32:40.517 05:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:40.517 05:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:40.517 05:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:40.517 05:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:40.517 05:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:40.517 05:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:40.517 rmmod nvme_tcp 00:32:40.517 rmmod nvme_fabrics 00:32:40.517 rmmod nvme_keyring 00:32:40.777 05:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:40.777 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:40.777 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:40.777 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 693092 ']' 00:32:40.777 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 693092 00:32:40.777 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 693092 ']' 00:32:40.777 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 693092 00:32:40.777 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:32:40.777 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:40.777 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 693092 00:32:40.777 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:40.777 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:40.777 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 693092' 00:32:40.777 killing process with pid 693092 00:32:40.777 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 693092 00:32:40.777 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 693092 00:32:41.037 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:41.037 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:41.037 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:41.037 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:41.037 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:41.037 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:41.037 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:41.037 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:41.037 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:41.037 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.037 05:27:23 
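Teardown above removes the SPDK firewall rule by filtering the saved ruleset rather than deleting rules one by one: every rule was inserted with `-m comment --comment 'SPDK_NVMF:...'`, so `iptables-save | grep -v SPDK_NVMF | iptables-restore` (nvmf/common.sh@791) drops them all at once. A hedged sketch of that filter, run here on a canned ruleset string instead of a live firewall:

```shell
# Sketch of the comment-tag cleanup traced at nvmf/common.sh@791: rules
# tagged with the SPDK_NVMF comment are dropped by filtering the saved
# ruleset. 'saved' is a canned example, not real iptables-save output.
saved='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -m comment --comment "SPDK_NVMF:..." -j ACCEPT
-A INPUT -j DROP'
cleaned=$(printf '%s\n' "$saved" | grep -v SPDK_NVMF)
printf '%s\n' "$cleaned"
# Real helper: iptables-save | grep -v SPDK_NVMF | iptables-restore
```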
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:41.037 05:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.944 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:42.944 00:32:42.944 real 0m23.981s 00:32:42.944 user 0m54.325s 00:32:42.944 sys 0m12.951s 00:32:42.944 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:42.944 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:42.944 ************************************ 00:32:42.944 END TEST nvmf_lvol 00:32:42.944 ************************************ 00:32:43.203 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:43.203 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:43.203 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:43.203 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:43.203 ************************************ 00:32:43.203 START TEST nvmf_lvs_grow 00:32:43.203 ************************************ 00:32:43.203 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:43.203 * Looking for test storage... 
00:32:43.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:43.203 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:43.203 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:32:43.203 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:43.203 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:43.203 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:43.203 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:43.203 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:43.203 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:43.203 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:43.203 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:43.203 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:43.203 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:43.462 05:27:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:43.462 05:27:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:43.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.462 --rc genhtml_branch_coverage=1 00:32:43.462 --rc genhtml_function_coverage=1 00:32:43.462 --rc genhtml_legend=1 00:32:43.462 --rc geninfo_all_blocks=1 00:32:43.462 --rc geninfo_unexecuted_blocks=1 00:32:43.462 00:32:43.462 ' 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:43.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.462 --rc genhtml_branch_coverage=1 00:32:43.462 --rc genhtml_function_coverage=1 00:32:43.462 --rc genhtml_legend=1 00:32:43.462 --rc geninfo_all_blocks=1 00:32:43.462 --rc geninfo_unexecuted_blocks=1 00:32:43.462 00:32:43.462 ' 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:43.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.462 --rc genhtml_branch_coverage=1 00:32:43.462 --rc genhtml_function_coverage=1 00:32:43.462 --rc genhtml_legend=1 00:32:43.462 --rc geninfo_all_blocks=1 00:32:43.462 --rc geninfo_unexecuted_blocks=1 00:32:43.462 00:32:43.462 ' 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:43.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.462 --rc genhtml_branch_coverage=1 00:32:43.462 --rc genhtml_function_coverage=1 00:32:43.462 --rc genhtml_legend=1 00:32:43.462 --rc geninfo_all_blocks=1 00:32:43.462 --rc 
geninfo_unexecuted_blocks=1 00:32:43.462 00:32:43.462 ' 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:32:43.462 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:32:43.462 05:27:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.463 05:27:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:43.463 05:27:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:43.463 05:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:51.596 
05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:51.596 05:27:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:51.596 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:51.596 05:27:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:51.597 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:51.597 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:51.597 Found net devices under 0000:af:00.0: cvl_0_0 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:51.597 05:27:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:51.597 Found net devices under 0000:af:00.1: cvl_0_1 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:51.597 
05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:51.597 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:51.598 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:51.598 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:51.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:51.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:32:51.598 00:32:51.598 --- 10.0.0.2 ping statistics --- 00:32:51.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.598 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:32:51.598 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:51.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:51.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:32:51.598 00:32:51.598 --- 10.0.0.1 ping statistics --- 00:32:51.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.598 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:32:51.598 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:51.598 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:32:51.598 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:51.598 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:51.598 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:51.598 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:51.598 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:51.598 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:51.598 05:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:51.598 05:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:51.598 05:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:51.598 05:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:51.598 05:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:51.598 05:27:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=699095 00:32:51.598 05:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:51.598 05:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 699095 00:32:51.598 05:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 699095 ']' 00:32:51.598 05:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.598 05:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:51.598 05:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:51.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:51.598 05:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:51.598 05:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:51.598 [2024-12-09 05:27:33.070113] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:51.598 [2024-12-09 05:27:33.071078] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:32:51.598 [2024-12-09 05:27:33.071116] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:51.598 [2024-12-09 05:27:33.169292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.598 [2024-12-09 05:27:33.209721] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:51.598 [2024-12-09 05:27:33.209755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:51.598 [2024-12-09 05:27:33.209767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:51.598 [2024-12-09 05:27:33.209775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:51.598 [2024-12-09 05:27:33.209782] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:51.598 [2024-12-09 05:27:33.210340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.598 [2024-12-09 05:27:33.277671] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:51.598 [2024-12-09 05:27:33.277914] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:51.598 05:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:51.598 05:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:32:51.598 05:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:51.598 05:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:51.598 05:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:51.598 05:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:51.598 05:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:51.860 [2024-12-09 05:27:34.123048] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:51.860 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:51.860 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:51.860 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:51.860 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:51.860 ************************************ 00:32:51.860 START TEST lvs_grow_clean 00:32:51.860 ************************************ 00:32:51.860 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:32:51.860 05:27:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:51.860 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:51.860 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:51.860 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:51.860 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:51.860 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:51.860 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:51.860 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:51.860 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:52.119 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:52.119 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:52.378 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=cde8336b-4e69-4a46-b49e-ffa7bd990a2c 00:32:52.378 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cde8336b-4e69-4a46-b49e-ffa7bd990a2c 00:32:52.378 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:52.378 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:52.378 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:52.378 05:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cde8336b-4e69-4a46-b49e-ffa7bd990a2c lvol 150 00:32:52.637 05:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0a65514e-52f1-4688-a4ee-fe27b09056c5 00:32:52.637 05:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:52.637 05:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:52.896 [2024-12-09 05:27:35.194749] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:52.896 [2024-12-09 05:27:35.194881] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:52.896 true 00:32:52.896 05:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:52.896 05:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cde8336b-4e69-4a46-b49e-ffa7bd990a2c 00:32:53.155 05:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:53.155 05:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:53.155 05:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0a65514e-52f1-4688-a4ee-fe27b09056c5 00:32:53.414 05:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:53.673 [2024-12-09 05:27:35.935314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:53.674 05:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:53.933 05:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=699613 00:32:53.933 05:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:53.933 05:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:53.933 05:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 699613 /var/tmp/bdevperf.sock 00:32:53.933 05:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 699613 ']' 00:32:53.933 05:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:53.933 05:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:53.933 05:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:53.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:53.933 05:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:53.933 05:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:53.933 [2024-12-09 05:27:36.196414] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:32:53.933 [2024-12-09 05:27:36.196473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid699613 ] 00:32:53.933 [2024-12-09 05:27:36.286906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.933 [2024-12-09 05:27:36.326624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:54.870 05:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:54.870 05:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:32:54.870 05:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:55.128 Nvme0n1 00:32:55.128 05:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:55.388 [ 00:32:55.388 { 00:32:55.388 "name": "Nvme0n1", 00:32:55.388 "aliases": [ 00:32:55.388 "0a65514e-52f1-4688-a4ee-fe27b09056c5" 00:32:55.388 ], 00:32:55.388 "product_name": "NVMe disk", 00:32:55.388 
"block_size": 4096, 00:32:55.388 "num_blocks": 38912, 00:32:55.388 "uuid": "0a65514e-52f1-4688-a4ee-fe27b09056c5", 00:32:55.388 "numa_id": 1, 00:32:55.388 "assigned_rate_limits": { 00:32:55.388 "rw_ios_per_sec": 0, 00:32:55.388 "rw_mbytes_per_sec": 0, 00:32:55.388 "r_mbytes_per_sec": 0, 00:32:55.388 "w_mbytes_per_sec": 0 00:32:55.388 }, 00:32:55.388 "claimed": false, 00:32:55.388 "zoned": false, 00:32:55.388 "supported_io_types": { 00:32:55.388 "read": true, 00:32:55.388 "write": true, 00:32:55.388 "unmap": true, 00:32:55.388 "flush": true, 00:32:55.388 "reset": true, 00:32:55.388 "nvme_admin": true, 00:32:55.388 "nvme_io": true, 00:32:55.388 "nvme_io_md": false, 00:32:55.388 "write_zeroes": true, 00:32:55.388 "zcopy": false, 00:32:55.388 "get_zone_info": false, 00:32:55.388 "zone_management": false, 00:32:55.388 "zone_append": false, 00:32:55.388 "compare": true, 00:32:55.388 "compare_and_write": true, 00:32:55.388 "abort": true, 00:32:55.388 "seek_hole": false, 00:32:55.388 "seek_data": false, 00:32:55.388 "copy": true, 00:32:55.388 "nvme_iov_md": false 00:32:55.388 }, 00:32:55.388 "memory_domains": [ 00:32:55.388 { 00:32:55.388 "dma_device_id": "system", 00:32:55.388 "dma_device_type": 1 00:32:55.388 } 00:32:55.388 ], 00:32:55.388 "driver_specific": { 00:32:55.388 "nvme": [ 00:32:55.388 { 00:32:55.388 "trid": { 00:32:55.388 "trtype": "TCP", 00:32:55.388 "adrfam": "IPv4", 00:32:55.388 "traddr": "10.0.0.2", 00:32:55.388 "trsvcid": "4420", 00:32:55.388 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:55.388 }, 00:32:55.388 "ctrlr_data": { 00:32:55.388 "cntlid": 1, 00:32:55.388 "vendor_id": "0x8086", 00:32:55.388 "model_number": "SPDK bdev Controller", 00:32:55.388 "serial_number": "SPDK0", 00:32:55.388 "firmware_revision": "25.01", 00:32:55.388 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:55.388 "oacs": { 00:32:55.388 "security": 0, 00:32:55.388 "format": 0, 00:32:55.388 "firmware": 0, 00:32:55.388 "ns_manage": 0 00:32:55.388 }, 00:32:55.388 "multi_ctrlr": true, 
00:32:55.388 "ana_reporting": false 00:32:55.388 }, 00:32:55.388 "vs": { 00:32:55.388 "nvme_version": "1.3" 00:32:55.388 }, 00:32:55.388 "ns_data": { 00:32:55.388 "id": 1, 00:32:55.388 "can_share": true 00:32:55.388 } 00:32:55.388 } 00:32:55.388 ], 00:32:55.388 "mp_policy": "active_passive" 00:32:55.388 } 00:32:55.388 } 00:32:55.388 ] 00:32:55.388 05:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:55.388 05:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=699822 00:32:55.389 05:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:55.389 Running I/O for 10 seconds... 00:32:56.326 Latency(us) 00:32:56.326 [2024-12-09T04:27:38.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:56.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:56.326 Nvme0n1 : 1.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:32:56.326 [2024-12-09T04:27:38.796Z] =================================================================================================================== 00:32:56.326 [2024-12-09T04:27:38.796Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:32:56.326 00:32:57.265 05:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cde8336b-4e69-4a46-b49e-ffa7bd990a2c 00:32:57.265 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:57.265 Nvme0n1 : 2.00 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:32:57.265 [2024-12-09T04:27:39.735Z] 
=================================================================================================================== 00:32:57.265 [2024-12-09T04:27:39.735Z] Total : 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:32:57.265 00:32:57.525 true 00:32:57.525 05:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cde8336b-4e69-4a46-b49e-ffa7bd990a2c 00:32:57.525 05:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:57.784 05:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:57.785 05:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:57.785 05:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 699822 00:32:58.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:58.352 Nvme0n1 : 3.00 23215.67 90.69 0.00 0.00 0.00 0.00 0.00 00:32:58.352 [2024-12-09T04:27:40.822Z] =================================================================================================================== 00:32:58.352 [2024-12-09T04:27:40.822Z] Total : 23215.67 90.69 0.00 0.00 0.00 0.00 0.00 00:32:58.352 00:32:59.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:59.290 Nvme0n1 : 4.00 23349.00 91.21 0.00 0.00 0.00 0.00 0.00 00:32:59.290 [2024-12-09T04:27:41.760Z] =================================================================================================================== 00:32:59.290 [2024-12-09T04:27:41.760Z] Total : 23349.00 91.21 0.00 0.00 0.00 0.00 0.00 00:32:59.290 00:33:00.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:33:00.680 Nvme0n1 : 5.00 23454.40 91.62 0.00 0.00 0.00 0.00 0.00 00:33:00.680 [2024-12-09T04:27:43.150Z] =================================================================================================================== 00:33:00.680 [2024-12-09T04:27:43.150Z] Total : 23454.40 91.62 0.00 0.00 0.00 0.00 0.00 00:33:00.680 00:33:01.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:01.615 Nvme0n1 : 6.00 23524.67 91.89 0.00 0.00 0.00 0.00 0.00 00:33:01.615 [2024-12-09T04:27:44.086Z] =================================================================================================================== 00:33:01.616 [2024-12-09T04:27:44.086Z] Total : 23524.67 91.89 0.00 0.00 0.00 0.00 0.00 00:33:01.616 00:33:02.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:02.552 Nvme0n1 : 7.00 23574.86 92.09 0.00 0.00 0.00 0.00 0.00 00:33:02.552 [2024-12-09T04:27:45.022Z] =================================================================================================================== 00:33:02.552 [2024-12-09T04:27:45.022Z] Total : 23574.86 92.09 0.00 0.00 0.00 0.00 0.00 00:33:02.552 00:33:03.494 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:03.494 Nvme0n1 : 8.00 23628.38 92.30 0.00 0.00 0.00 0.00 0.00 00:33:03.494 [2024-12-09T04:27:45.964Z] =================================================================================================================== 00:33:03.494 [2024-12-09T04:27:45.964Z] Total : 23628.38 92.30 0.00 0.00 0.00 0.00 0.00 00:33:03.494 00:33:04.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:04.433 Nvme0n1 : 9.00 23663.00 92.43 0.00 0.00 0.00 0.00 0.00 00:33:04.433 [2024-12-09T04:27:46.903Z] =================================================================================================================== 00:33:04.433 [2024-12-09T04:27:46.903Z] Total : 23663.00 92.43 0.00 0.00 0.00 0.00 0.00 00:33:04.433 
00:33:05.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:05.372 Nvme0n1 : 10.00 23697.00 92.57 0.00 0.00 0.00 0.00 0.00 00:33:05.372 [2024-12-09T04:27:47.842Z] =================================================================================================================== 00:33:05.372 [2024-12-09T04:27:47.842Z] Total : 23697.00 92.57 0.00 0.00 0.00 0.00 0.00 00:33:05.372 00:33:05.372 00:33:05.372 Latency(us) 00:33:05.372 [2024-12-09T04:27:47.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:05.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:05.372 Nvme0n1 : 10.01 23695.27 92.56 0.00 0.00 5399.31 3053.98 27262.98 00:33:05.372 [2024-12-09T04:27:47.842Z] =================================================================================================================== 00:33:05.372 [2024-12-09T04:27:47.842Z] Total : 23695.27 92.56 0.00 0.00 5399.31 3053.98 27262.98 00:33:05.372 { 00:33:05.372 "results": [ 00:33:05.372 { 00:33:05.372 "job": "Nvme0n1", 00:33:05.372 "core_mask": "0x2", 00:33:05.372 "workload": "randwrite", 00:33:05.372 "status": "finished", 00:33:05.372 "queue_depth": 128, 00:33:05.372 "io_size": 4096, 00:33:05.372 "runtime": 10.006131, 00:33:05.372 "iops": 23695.2724284741, 00:33:05.372 "mibps": 92.55965792372696, 00:33:05.372 "io_failed": 0, 00:33:05.372 "io_timeout": 0, 00:33:05.372 "avg_latency_us": 5399.31420368624, 00:33:05.372 "min_latency_us": 3053.9776, 00:33:05.372 "max_latency_us": 27262.976 00:33:05.372 } 00:33:05.372 ], 00:33:05.372 "core_count": 1 00:33:05.372 } 00:33:05.372 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 699613 00:33:05.372 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 699613 ']' 00:33:05.372 05:27:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 699613 00:33:05.372 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:33:05.372 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:05.372 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 699613 00:33:05.372 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:05.372 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:05.372 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 699613' 00:33:05.372 killing process with pid 699613 00:33:05.372 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 699613 00:33:05.372 Received shutdown signal, test time was about 10.000000 seconds 00:33:05.372 00:33:05.372 Latency(us) 00:33:05.372 [2024-12-09T04:27:47.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:05.372 [2024-12-09T04:27:47.842Z] =================================================================================================================== 00:33:05.372 [2024-12-09T04:27:47.842Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:05.372 05:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 699613 00:33:05.651 05:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:05.909 05:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:06.167 05:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cde8336b-4e69-4a46-b49e-ffa7bd990a2c 00:33:06.167 05:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:06.167 05:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:06.167 05:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:33:06.167 05:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:06.424 [2024-12-09 05:27:48.774841] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:06.424 05:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cde8336b-4e69-4a46-b49e-ffa7bd990a2c 00:33:06.424 05:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:33:06.424 05:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cde8336b-4e69-4a46-b49e-ffa7bd990a2c 00:33:06.424 05:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:06.424 05:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:06.424 05:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:06.424 05:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:06.424 05:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:06.424 05:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:06.424 05:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:06.424 05:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:06.424 05:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cde8336b-4e69-4a46-b49e-ffa7bd990a2c 00:33:06.682 request: 00:33:06.682 { 00:33:06.683 "uuid": "cde8336b-4e69-4a46-b49e-ffa7bd990a2c", 00:33:06.683 "method": 
"bdev_lvol_get_lvstores", 00:33:06.683 "req_id": 1 00:33:06.683 } 00:33:06.683 Got JSON-RPC error response 00:33:06.683 response: 00:33:06.683 { 00:33:06.683 "code": -19, 00:33:06.683 "message": "No such device" 00:33:06.683 } 00:33:06.683 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:33:06.683 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:06.683 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:06.683 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:06.683 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:06.941 aio_bdev 00:33:06.941 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0a65514e-52f1-4688-a4ee-fe27b09056c5 00:33:06.941 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=0a65514e-52f1-4688-a4ee-fe27b09056c5 00:33:06.941 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:06.941 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:33:06.941 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:06.941 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:06.941 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:07.200 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0a65514e-52f1-4688-a4ee-fe27b09056c5 -t 2000 00:33:07.200 [ 00:33:07.200 { 00:33:07.200 "name": "0a65514e-52f1-4688-a4ee-fe27b09056c5", 00:33:07.200 "aliases": [ 00:33:07.200 "lvs/lvol" 00:33:07.200 ], 00:33:07.200 "product_name": "Logical Volume", 00:33:07.200 "block_size": 4096, 00:33:07.200 "num_blocks": 38912, 00:33:07.200 "uuid": "0a65514e-52f1-4688-a4ee-fe27b09056c5", 00:33:07.200 "assigned_rate_limits": { 00:33:07.200 "rw_ios_per_sec": 0, 00:33:07.200 "rw_mbytes_per_sec": 0, 00:33:07.200 "r_mbytes_per_sec": 0, 00:33:07.200 "w_mbytes_per_sec": 0 00:33:07.200 }, 00:33:07.200 "claimed": false, 00:33:07.200 "zoned": false, 00:33:07.200 "supported_io_types": { 00:33:07.200 "read": true, 00:33:07.200 "write": true, 00:33:07.200 "unmap": true, 00:33:07.200 "flush": false, 00:33:07.200 "reset": true, 00:33:07.200 "nvme_admin": false, 00:33:07.200 "nvme_io": false, 00:33:07.200 "nvme_io_md": false, 00:33:07.200 "write_zeroes": true, 00:33:07.200 "zcopy": false, 00:33:07.200 "get_zone_info": false, 00:33:07.200 "zone_management": false, 00:33:07.200 "zone_append": false, 00:33:07.200 "compare": false, 00:33:07.200 "compare_and_write": false, 00:33:07.200 "abort": false, 00:33:07.200 "seek_hole": true, 00:33:07.200 "seek_data": true, 00:33:07.200 "copy": false, 00:33:07.200 "nvme_iov_md": false 00:33:07.200 }, 00:33:07.200 "driver_specific": { 00:33:07.200 "lvol": { 00:33:07.200 "lvol_store_uuid": "cde8336b-4e69-4a46-b49e-ffa7bd990a2c", 00:33:07.200 "base_bdev": "aio_bdev", 00:33:07.200 
"thin_provision": false, 00:33:07.200 "num_allocated_clusters": 38, 00:33:07.200 "snapshot": false, 00:33:07.200 "clone": false, 00:33:07.200 "esnap_clone": false 00:33:07.200 } 00:33:07.200 } 00:33:07.200 } 00:33:07.200 ] 00:33:07.200 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:33:07.200 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cde8336b-4e69-4a46-b49e-ffa7bd990a2c 00:33:07.200 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:07.459 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:07.459 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cde8336b-4e69-4a46-b49e-ffa7bd990a2c 00:33:07.459 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:07.717 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:07.717 05:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0a65514e-52f1-4688-a4ee-fe27b09056c5 00:33:07.975 05:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cde8336b-4e69-4a46-b49e-ffa7bd990a2c 
00:33:07.975 05:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:08.233 05:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:08.233 00:33:08.233 real 0m16.445s 00:33:08.233 user 0m15.601s 00:33:08.233 sys 0m2.003s 00:33:08.233 05:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:08.233 05:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:08.233 ************************************ 00:33:08.233 END TEST lvs_grow_clean 00:33:08.233 ************************************ 00:33:08.233 05:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:33:08.233 05:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:08.233 05:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:08.233 05:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:08.492 ************************************ 00:33:08.492 START TEST lvs_grow_dirty 00:33:08.492 ************************************ 00:33:08.492 05:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:33:08.492 05:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:08.492 05:27:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:08.492 05:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:08.492 05:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:08.492 05:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:08.492 05:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:08.492 05:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:08.492 05:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:08.492 05:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:08.492 05:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:08.492 05:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:08.751 05:27:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2b4f208c-4a57-49e5-9c27-605dab70330f 00:33:08.751 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b4f208c-4a57-49e5-9c27-605dab70330f 00:33:08.751 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:09.009 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:09.009 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:09.010 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2b4f208c-4a57-49e5-9c27-605dab70330f lvol 150 00:33:09.268 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=2e645b97-bc59-4cc0-a96e-4b9599292831 00:33:09.268 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:09.268 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:09.268 [2024-12-09 05:27:51.714770] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:09.268 [2024-12-09 
05:27:51.714921] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:09.268 true 00:33:09.268 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b4f208c-4a57-49e5-9c27-605dab70330f 00:33:09.268 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:09.527 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:09.527 05:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:09.786 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2e645b97-bc59-4cc0-a96e-4b9599292831 00:33:10.045 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:10.045 [2024-12-09 05:27:52.456789] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:10.045 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:10.320 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:10.320 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=702446 00:33:10.320 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:10.320 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 702446 /var/tmp/bdevperf.sock 00:33:10.320 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 702446 ']' 00:33:10.320 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:10.320 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.320 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:10.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:10.320 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.320 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:10.320 [2024-12-09 05:27:52.674813] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:33:10.320 [2024-12-09 05:27:52.674865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid702446 ] 00:33:10.320 [2024-12-09 05:27:52.767708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.579 [2024-12-09 05:27:52.807812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:10.579 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:10.579 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:10.579 05:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:10.837 Nvme0n1 00:33:10.837 05:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:11.096 [ 00:33:11.096 { 00:33:11.096 "name": "Nvme0n1", 00:33:11.096 "aliases": [ 00:33:11.096 "2e645b97-bc59-4cc0-a96e-4b9599292831" 00:33:11.096 ], 00:33:11.096 "product_name": "NVMe disk", 00:33:11.096 "block_size": 4096, 00:33:11.096 "num_blocks": 38912, 00:33:11.096 "uuid": "2e645b97-bc59-4cc0-a96e-4b9599292831", 00:33:11.096 "numa_id": 1, 00:33:11.096 "assigned_rate_limits": { 00:33:11.096 "rw_ios_per_sec": 0, 00:33:11.096 "rw_mbytes_per_sec": 0, 00:33:11.096 "r_mbytes_per_sec": 0, 00:33:11.096 "w_mbytes_per_sec": 0 00:33:11.096 }, 00:33:11.096 "claimed": false, 00:33:11.096 "zoned": false, 
00:33:11.096 "supported_io_types": { 00:33:11.096 "read": true, 00:33:11.096 "write": true, 00:33:11.096 "unmap": true, 00:33:11.096 "flush": true, 00:33:11.096 "reset": true, 00:33:11.096 "nvme_admin": true, 00:33:11.096 "nvme_io": true, 00:33:11.096 "nvme_io_md": false, 00:33:11.096 "write_zeroes": true, 00:33:11.096 "zcopy": false, 00:33:11.096 "get_zone_info": false, 00:33:11.096 "zone_management": false, 00:33:11.096 "zone_append": false, 00:33:11.096 "compare": true, 00:33:11.096 "compare_and_write": true, 00:33:11.096 "abort": true, 00:33:11.096 "seek_hole": false, 00:33:11.096 "seek_data": false, 00:33:11.096 "copy": true, 00:33:11.096 "nvme_iov_md": false 00:33:11.096 }, 00:33:11.096 "memory_domains": [ 00:33:11.096 { 00:33:11.096 "dma_device_id": "system", 00:33:11.096 "dma_device_type": 1 00:33:11.096 } 00:33:11.096 ], 00:33:11.096 "driver_specific": { 00:33:11.096 "nvme": [ 00:33:11.096 { 00:33:11.096 "trid": { 00:33:11.096 "trtype": "TCP", 00:33:11.096 "adrfam": "IPv4", 00:33:11.096 "traddr": "10.0.0.2", 00:33:11.096 "trsvcid": "4420", 00:33:11.096 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:11.096 }, 00:33:11.096 "ctrlr_data": { 00:33:11.096 "cntlid": 1, 00:33:11.096 "vendor_id": "0x8086", 00:33:11.096 "model_number": "SPDK bdev Controller", 00:33:11.096 "serial_number": "SPDK0", 00:33:11.096 "firmware_revision": "25.01", 00:33:11.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:11.096 "oacs": { 00:33:11.096 "security": 0, 00:33:11.096 "format": 0, 00:33:11.096 "firmware": 0, 00:33:11.096 "ns_manage": 0 00:33:11.096 }, 00:33:11.096 "multi_ctrlr": true, 00:33:11.096 "ana_reporting": false 00:33:11.096 }, 00:33:11.096 "vs": { 00:33:11.096 "nvme_version": "1.3" 00:33:11.096 }, 00:33:11.096 "ns_data": { 00:33:11.096 "id": 1, 00:33:11.096 "can_share": true 00:33:11.096 } 00:33:11.096 } 00:33:11.096 ], 00:33:11.096 "mp_policy": "active_passive" 00:33:11.096 } 00:33:11.096 } 00:33:11.096 ] 00:33:11.096 05:27:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=702514 00:33:11.096 05:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:11.096 05:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:11.096 Running I/O for 10 seconds... 00:33:12.035 Latency(us) 00:33:12.035 [2024-12-09T04:27:54.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.035 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:12.035 Nvme0n1 : 1.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:33:12.035 [2024-12-09T04:27:54.505Z] =================================================================================================================== 00:33:12.035 [2024-12-09T04:27:54.505Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:33:12.035 00:33:12.973 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2b4f208c-4a57-49e5-9c27-605dab70330f 00:33:12.973 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:12.973 Nvme0n1 : 2.00 23400.00 91.41 0.00 0.00 0.00 0.00 0.00 00:33:12.973 [2024-12-09T04:27:55.443Z] =================================================================================================================== 00:33:12.973 [2024-12-09T04:27:55.443Z] Total : 23400.00 91.41 0.00 0.00 0.00 0.00 0.00 00:33:12.973 00:33:13.231 true 00:33:13.231 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 2b4f208c-4a57-49e5-9c27-605dab70330f 00:33:13.231 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:13.490 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:13.490 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:13.490 05:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 702514 00:33:14.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:14.059 Nvme0n1 : 3.00 23469.00 91.68 0.00 0.00 0.00 0.00 0.00 00:33:14.059 [2024-12-09T04:27:56.529Z] =================================================================================================================== 00:33:14.059 [2024-12-09T04:27:56.529Z] Total : 23469.00 91.68 0.00 0.00 0.00 0.00 0.00 00:33:14.059 00:33:14.996 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:14.996 Nvme0n1 : 4.00 23570.75 92.07 0.00 0.00 0.00 0.00 0.00 00:33:14.996 [2024-12-09T04:27:57.466Z] =================================================================================================================== 00:33:14.996 [2024-12-09T04:27:57.466Z] Total : 23570.75 92.07 0.00 0.00 0.00 0.00 0.00 00:33:14.996 00:33:16.377 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:16.377 Nvme0n1 : 5.00 23631.80 92.31 0.00 0.00 0.00 0.00 0.00 00:33:16.377 [2024-12-09T04:27:58.847Z] =================================================================================================================== 00:33:16.377 [2024-12-09T04:27:58.847Z] Total : 23631.80 92.31 0.00 0.00 0.00 0.00 0.00 00:33:16.377 00:33:17.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:33:17.315 Nvme0n1 : 6.00 23672.50 92.47 0.00 0.00 0.00 0.00 0.00 00:33:17.315 [2024-12-09T04:27:59.785Z] =================================================================================================================== 00:33:17.315 [2024-12-09T04:27:59.785Z] Total : 23672.50 92.47 0.00 0.00 0.00 0.00 0.00 00:33:17.315 00:33:18.253 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:18.253 Nvme0n1 : 7.00 23683.43 92.51 0.00 0.00 0.00 0.00 0.00 00:33:18.253 [2024-12-09T04:28:00.723Z] =================================================================================================================== 00:33:18.253 [2024-12-09T04:28:00.723Z] Total : 23683.43 92.51 0.00 0.00 0.00 0.00 0.00 00:33:18.253 00:33:19.189 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:19.189 Nvme0n1 : 8.00 23677.88 92.49 0.00 0.00 0.00 0.00 0.00 00:33:19.189 [2024-12-09T04:28:01.659Z] =================================================================================================================== 00:33:19.189 [2024-12-09T04:28:01.659Z] Total : 23677.88 92.49 0.00 0.00 0.00 0.00 0.00 00:33:19.189 00:33:20.259 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:20.259 Nvme0n1 : 9.00 23685.78 92.52 0.00 0.00 0.00 0.00 0.00 00:33:20.259 [2024-12-09T04:28:02.729Z] =================================================================================================================== 00:33:20.259 [2024-12-09T04:28:02.729Z] Total : 23685.78 92.52 0.00 0.00 0.00 0.00 0.00 00:33:20.259 00:33:21.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:21.207 Nvme0n1 : 10.00 23704.80 92.60 0.00 0.00 0.00 0.00 0.00 00:33:21.207 [2024-12-09T04:28:03.677Z] =================================================================================================================== 00:33:21.207 [2024-12-09T04:28:03.677Z] Total : 23704.80 92.60 0.00 0.00 0.00 0.00 0.00 00:33:21.207 00:33:21.207 
00:33:21.207 Latency(us) 00:33:21.207 [2024-12-09T04:28:03.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:21.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:21.207 Nvme0n1 : 10.00 23706.65 92.60 0.00 0.00 5396.41 3106.41 25794.97 00:33:21.207 [2024-12-09T04:28:03.677Z] =================================================================================================================== 00:33:21.207 [2024-12-09T04:28:03.677Z] Total : 23706.65 92.60 0.00 0.00 5396.41 3106.41 25794.97 00:33:21.207 { 00:33:21.207 "results": [ 00:33:21.207 { 00:33:21.207 "job": "Nvme0n1", 00:33:21.207 "core_mask": "0x2", 00:33:21.207 "workload": "randwrite", 00:33:21.207 "status": "finished", 00:33:21.207 "queue_depth": 128, 00:33:21.207 "io_size": 4096, 00:33:21.207 "runtime": 10.004621, 00:33:21.207 "iops": 23706.645159271902, 00:33:21.207 "mibps": 92.60408265340587, 00:33:21.207 "io_failed": 0, 00:33:21.207 "io_timeout": 0, 00:33:21.207 "avg_latency_us": 5396.413283016831, 00:33:21.207 "min_latency_us": 3106.4064, 00:33:21.207 "max_latency_us": 25794.9696 00:33:21.207 } 00:33:21.207 ], 00:33:21.207 "core_count": 1 00:33:21.207 } 00:33:21.207 05:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 702446 00:33:21.207 05:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 702446 ']' 00:33:21.207 05:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 702446 00:33:21.207 05:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:33:21.207 05:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:21.207 05:28:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 702446 00:33:21.207 05:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:21.207 05:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:21.207 05:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 702446' 00:33:21.207 killing process with pid 702446 00:33:21.207 05:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 702446 00:33:21.207 Received shutdown signal, test time was about 10.000000 seconds 00:33:21.207 00:33:21.207 Latency(us) 00:33:21.207 [2024-12-09T04:28:03.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:21.207 [2024-12-09T04:28:03.677Z] =================================================================================================================== 00:33:21.207 [2024-12-09T04:28:03.677Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:21.207 05:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 702446 00:33:21.467 05:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:21.467 05:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:21.725 05:28:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b4f208c-4a57-49e5-9c27-605dab70330f 00:33:21.725 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:21.984 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:21.984 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:33:21.984 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 699095 00:33:21.984 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 699095 00:33:21.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 699095 Killed "${NVMF_APP[@]}" "$@" 00:33:21.984 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:33:21.984 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:33:21.984 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:21.984 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:21.984 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:21.984 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=704406 00:33:21.984 05:28:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 704406 00:33:21.984 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:21.984 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 704406 ']' 00:33:21.984 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:21.984 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:21.984 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:21.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:21.984 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:21.984 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:21.984 [2024-12-09 05:28:04.424516] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:21.984 [2024-12-09 05:28:04.425421] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:33:21.984 [2024-12-09 05:28:04.425455] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:22.254 [2024-12-09 05:28:04.508154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.254 [2024-12-09 05:28:04.546200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:22.254 [2024-12-09 05:28:04.546242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:22.254 [2024-12-09 05:28:04.546251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:22.254 [2024-12-09 05:28:04.546260] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:22.254 [2024-12-09 05:28:04.546283] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:22.254 [2024-12-09 05:28:04.546880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.254 [2024-12-09 05:28:04.613881] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:22.254 [2024-12-09 05:28:04.614120] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:22.254 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:22.254 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:22.254 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:22.254 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:22.254 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:22.254 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:22.254 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:22.513 [2024-12-09 05:28:04.869148] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:22.513 [2024-12-09 05:28:04.869389] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:22.513 [2024-12-09 05:28:04.869484] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:22.513 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:33:22.513 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 2e645b97-bc59-4cc0-a96e-4b9599292831 00:33:22.513 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=2e645b97-bc59-4cc0-a96e-4b9599292831 00:33:22.513 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:22.513 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:22.513 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:22.513 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:22.513 05:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:22.772 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2e645b97-bc59-4cc0-a96e-4b9599292831 -t 2000 00:33:23.030 [ 00:33:23.030 { 00:33:23.030 "name": "2e645b97-bc59-4cc0-a96e-4b9599292831", 00:33:23.030 "aliases": [ 00:33:23.030 "lvs/lvol" 00:33:23.030 ], 00:33:23.030 "product_name": "Logical Volume", 00:33:23.030 "block_size": 4096, 00:33:23.030 "num_blocks": 38912, 00:33:23.031 "uuid": "2e645b97-bc59-4cc0-a96e-4b9599292831", 00:33:23.031 "assigned_rate_limits": { 00:33:23.031 "rw_ios_per_sec": 0, 00:33:23.031 "rw_mbytes_per_sec": 0, 00:33:23.031 "r_mbytes_per_sec": 0, 00:33:23.031 "w_mbytes_per_sec": 0 00:33:23.031 }, 00:33:23.031 "claimed": false, 00:33:23.031 "zoned": false, 00:33:23.031 "supported_io_types": { 00:33:23.031 "read": true, 00:33:23.031 "write": true, 00:33:23.031 "unmap": true, 00:33:23.031 "flush": false, 00:33:23.031 "reset": true, 00:33:23.031 "nvme_admin": false, 00:33:23.031 "nvme_io": false, 00:33:23.031 "nvme_io_md": false, 00:33:23.031 "write_zeroes": true, 
00:33:23.031 "zcopy": false, 00:33:23.031 "get_zone_info": false, 00:33:23.031 "zone_management": false, 00:33:23.031 "zone_append": false, 00:33:23.031 "compare": false, 00:33:23.031 "compare_and_write": false, 00:33:23.031 "abort": false, 00:33:23.031 "seek_hole": true, 00:33:23.031 "seek_data": true, 00:33:23.031 "copy": false, 00:33:23.031 "nvme_iov_md": false 00:33:23.031 }, 00:33:23.031 "driver_specific": { 00:33:23.031 "lvol": { 00:33:23.031 "lvol_store_uuid": "2b4f208c-4a57-49e5-9c27-605dab70330f", 00:33:23.031 "base_bdev": "aio_bdev", 00:33:23.031 "thin_provision": false, 00:33:23.031 "num_allocated_clusters": 38, 00:33:23.031 "snapshot": false, 00:33:23.031 "clone": false, 00:33:23.031 "esnap_clone": false 00:33:23.031 } 00:33:23.031 } 00:33:23.031 } 00:33:23.031 ] 00:33:23.031 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:23.031 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b4f208c-4a57-49e5-9c27-605dab70330f 00:33:23.031 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:33:23.031 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:33:23.031 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b4f208c-4a57-49e5-9c27-605dab70330f 00:33:23.031 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:33:23.288 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:33:23.288 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:23.547 [2024-12-09 05:28:05.863422] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:23.547 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b4f208c-4a57-49e5-9c27-605dab70330f 00:33:23.547 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:33:23.547 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b4f208c-4a57-49e5-9c27-605dab70330f 00:33:23.547 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:23.547 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:23.547 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:23.547 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:23.547 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:23.547 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:23.547 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:23.547 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:23.547 05:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b4f208c-4a57-49e5-9c27-605dab70330f 00:33:23.806 request: 00:33:23.806 { 00:33:23.806 "uuid": "2b4f208c-4a57-49e5-9c27-605dab70330f", 00:33:23.806 "method": "bdev_lvol_get_lvstores", 00:33:23.806 "req_id": 1 00:33:23.806 } 00:33:23.806 Got JSON-RPC error response 00:33:23.806 response: 00:33:23.806 { 00:33:23.806 "code": -19, 00:33:23.806 "message": "No such device" 00:33:23.806 } 00:33:23.806 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:33:23.806 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:23.806 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:23.806 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:23.806 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:24.065 aio_bdev 00:33:24.065 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2e645b97-bc59-4cc0-a96e-4b9599292831 00:33:24.065 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=2e645b97-bc59-4cc0-a96e-4b9599292831 00:33:24.065 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:24.065 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:24.065 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:24.065 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:24.065 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:24.065 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2e645b97-bc59-4cc0-a96e-4b9599292831 -t 2000 00:33:24.326 [ 00:33:24.326 { 00:33:24.326 "name": "2e645b97-bc59-4cc0-a96e-4b9599292831", 00:33:24.326 "aliases": [ 00:33:24.326 "lvs/lvol" 00:33:24.326 ], 00:33:24.326 "product_name": "Logical Volume", 00:33:24.326 "block_size": 4096, 00:33:24.326 "num_blocks": 38912, 00:33:24.326 "uuid": "2e645b97-bc59-4cc0-a96e-4b9599292831", 00:33:24.326 "assigned_rate_limits": { 00:33:24.326 "rw_ios_per_sec": 0, 00:33:24.326 "rw_mbytes_per_sec": 0, 00:33:24.326 
"r_mbytes_per_sec": 0, 00:33:24.326 "w_mbytes_per_sec": 0 00:33:24.326 }, 00:33:24.326 "claimed": false, 00:33:24.326 "zoned": false, 00:33:24.326 "supported_io_types": { 00:33:24.326 "read": true, 00:33:24.326 "write": true, 00:33:24.326 "unmap": true, 00:33:24.326 "flush": false, 00:33:24.326 "reset": true, 00:33:24.326 "nvme_admin": false, 00:33:24.326 "nvme_io": false, 00:33:24.326 "nvme_io_md": false, 00:33:24.326 "write_zeroes": true, 00:33:24.326 "zcopy": false, 00:33:24.326 "get_zone_info": false, 00:33:24.326 "zone_management": false, 00:33:24.326 "zone_append": false, 00:33:24.326 "compare": false, 00:33:24.326 "compare_and_write": false, 00:33:24.326 "abort": false, 00:33:24.326 "seek_hole": true, 00:33:24.326 "seek_data": true, 00:33:24.326 "copy": false, 00:33:24.326 "nvme_iov_md": false 00:33:24.326 }, 00:33:24.326 "driver_specific": { 00:33:24.326 "lvol": { 00:33:24.326 "lvol_store_uuid": "2b4f208c-4a57-49e5-9c27-605dab70330f", 00:33:24.326 "base_bdev": "aio_bdev", 00:33:24.326 "thin_provision": false, 00:33:24.326 "num_allocated_clusters": 38, 00:33:24.326 "snapshot": false, 00:33:24.326 "clone": false, 00:33:24.326 "esnap_clone": false 00:33:24.326 } 00:33:24.326 } 00:33:24.326 } 00:33:24.326 ] 00:33:24.326 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:24.326 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b4f208c-4a57-49e5-9c27-605dab70330f 00:33:24.326 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:24.586 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:24.587 05:28:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b4f208c-4a57-49e5-9c27-605dab70330f 00:33:24.587 05:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:24.846 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:24.846 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2e645b97-bc59-4cc0-a96e-4b9599292831 00:33:24.846 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2b4f208c-4a57-49e5-9c27-605dab70330f 00:33:25.105 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:25.365 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:25.365 00:33:25.365 real 0m16.985s 00:33:25.365 user 0m33.660s 00:33:25.365 sys 0m4.629s 00:33:25.365 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:25.365 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:25.365 ************************************ 00:33:25.365 END TEST lvs_grow_dirty 00:33:25.365 ************************************ 
00:33:25.365 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:33:25.365 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:33:25.365 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:33:25.365 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:33:25.365 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:33:25.365 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:33:25.365 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:33:25.365 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:33:25.366 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:33:25.366 nvmf_trace.0 00:33:25.366 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:33:25.366 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:33:25.366 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:25.366 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:33:25.366 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:25.366 05:28:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:33:25.366 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:25.366 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:25.366 rmmod nvme_tcp 00:33:25.625 rmmod nvme_fabrics 00:33:25.625 rmmod nvme_keyring 00:33:25.625 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:25.625 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:33:25.625 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:33:25.625 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 704406 ']' 00:33:25.625 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 704406 00:33:25.625 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 704406 ']' 00:33:25.625 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 704406 00:33:25.625 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:33:25.625 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:25.625 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 704406 00:33:25.625 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:25.625 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:25.625 05:28:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 704406' 00:33:25.625 killing process with pid 704406 00:33:25.625 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 704406 00:33:25.625 05:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 704406 00:33:25.885 05:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:25.885 05:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:25.885 05:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:25.885 05:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:33:25.885 05:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:33:25.885 05:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:25.885 05:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:33:25.885 05:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:25.885 05:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:25.885 05:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.885 05:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:25.885 05:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.794 05:28:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:27.794 00:33:27.794 real 0m44.756s 00:33:27.794 user 0m52.309s 00:33:27.794 sys 0m12.743s 00:33:27.794 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:27.794 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:27.794 ************************************ 00:33:27.794 END TEST nvmf_lvs_grow 00:33:27.794 ************************************ 00:33:28.053 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:28.053 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:28.053 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:28.053 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:28.053 ************************************ 00:33:28.053 START TEST nvmf_bdev_io_wait 00:33:28.053 ************************************ 00:33:28.053 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:28.053 * Looking for test storage... 
00:33:28.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:28.053 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:28.053 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:33:28.053 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:28.053 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:28.053 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:28.053 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:28.053 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:28.053 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:33:28.053 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:33:28.053 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:33:28.053 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:33:28.054 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:33:28.054 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:33:28.054 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:33:28.054 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:33:28.054 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:33:28.054 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:33:28.054 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:28.054 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:28.054 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:33:28.054 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:33:28.054 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:28.054 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:33:28.054 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:33:28.054 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:33:28.054 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:33:28.054 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:28.054 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:33:28.313 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:33:28.313 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:28.313 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:28.313 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:33:28.313 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:28.313 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:28.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.313 --rc genhtml_branch_coverage=1 00:33:28.313 --rc genhtml_function_coverage=1 00:33:28.313 --rc genhtml_legend=1 00:33:28.313 --rc geninfo_all_blocks=1 00:33:28.313 --rc geninfo_unexecuted_blocks=1 00:33:28.313 00:33:28.313 ' 00:33:28.313 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:28.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.313 --rc genhtml_branch_coverage=1 00:33:28.313 --rc genhtml_function_coverage=1 00:33:28.313 --rc genhtml_legend=1 00:33:28.313 --rc geninfo_all_blocks=1 00:33:28.313 --rc geninfo_unexecuted_blocks=1 00:33:28.313 00:33:28.313 ' 00:33:28.313 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:28.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.313 --rc genhtml_branch_coverage=1 00:33:28.313 --rc genhtml_function_coverage=1 00:33:28.313 --rc genhtml_legend=1 00:33:28.313 --rc geninfo_all_blocks=1 00:33:28.313 --rc geninfo_unexecuted_blocks=1 00:33:28.313 00:33:28.313 ' 00:33:28.313 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:28.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.313 --rc genhtml_branch_coverage=1 00:33:28.313 --rc genhtml_function_coverage=1 
00:33:28.313 --rc genhtml_legend=1 00:33:28.313 --rc geninfo_all_blocks=1 00:33:28.313 --rc geninfo_unexecuted_blocks=1 00:33:28.313 00:33:28.313 ' 00:33:28.313 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:28.313 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:33:28.313 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:28.313 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:33:28.314 05:28:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.314 05:28:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:28.314 05:28:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:28.314 05:28:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:33:28.314 05:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:36.435 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:36.435 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:33:36.435 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:36.435 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:36.435 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:36.435 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:36.435 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:36.435 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:33:36.435 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:36.435 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:33:36.436 05:28:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:36.436 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:36.436 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:36.436 Found net devices under 0000:af:00.0: cvl_0_0 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:36.436 Found net devices under 0000:af:00.1: cvl_0_1 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:33:36.436 05:28:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:36.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:36.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:33:36.436 00:33:36.436 --- 10.0.0.2 ping statistics --- 00:33:36.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.436 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:33:36.436 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:36.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:36.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:33:36.436 00:33:36.436 --- 10.0.0.1 ping statistics --- 00:33:36.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.436 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:33:36.437 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:36.437 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:33:36.437 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:36.437 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:36.437 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:36.437 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:36.437 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:36.437 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:36.437 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:36.437 05:28:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:36.437 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:36.437 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:36.437 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:36.437 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=708703 00:33:36.437 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:36.437 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 708703 00:33:36.437 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 708703 ']' 00:33:36.437 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:36.437 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:36.437 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:36.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:36.437 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:36.437 05:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:36.437 [2024-12-09 05:28:17.913531] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:36.437 [2024-12-09 05:28:17.914586] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:33:36.437 [2024-12-09 05:28:17.914628] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:36.437 [2024-12-09 05:28:18.011321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:36.437 [2024-12-09 05:28:18.053768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:36.437 [2024-12-09 05:28:18.053807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:36.437 [2024-12-09 05:28:18.053816] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:36.437 [2024-12-09 05:28:18.053824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:36.437 [2024-12-09 05:28:18.053847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:36.437 [2024-12-09 05:28:18.055447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.437 [2024-12-09 05:28:18.055601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:36.437 [2024-12-09 05:28:18.055694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.437 [2024-12-09 05:28:18.055695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:36.437 [2024-12-09 05:28:18.056074] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:36.437 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:36.437 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:33:36.437 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:36.437 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:36.437 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:36.437 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:36.437 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:36.437 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.437 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:36.437 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.437 05:28:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:36.437 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.437 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:36.437 [2024-12-09 05:28:18.865871] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:36.437 [2024-12-09 05:28:18.866085] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:36.437 [2024-12-09 05:28:18.866608] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:36.437 [2024-12-09 05:28:18.866620] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:36.437 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.437 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:36.437 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.437 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:36.437 [2024-12-09 05:28:18.876239] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:36.437 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.437 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:36.437 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.437 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:36.698 Malloc0 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.698 05:28:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:36.698 [2024-12-09 05:28:18.948716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=708980 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=708983 00:33:36.698 05:28:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:36.698 { 00:33:36.698 "params": { 00:33:36.698 "name": "Nvme$subsystem", 00:33:36.698 "trtype": "$TEST_TRANSPORT", 00:33:36.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:36.698 "adrfam": "ipv4", 00:33:36.698 "trsvcid": "$NVMF_PORT", 00:33:36.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:36.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:36.698 "hdgst": ${hdgst:-false}, 00:33:36.698 "ddgst": ${ddgst:-false} 00:33:36.698 }, 00:33:36.698 "method": "bdev_nvme_attach_controller" 00:33:36.698 } 00:33:36.698 EOF 00:33:36.698 )") 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=708985 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:36.698 05:28:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:36.698 { 00:33:36.698 "params": { 00:33:36.698 "name": "Nvme$subsystem", 00:33:36.698 "trtype": "$TEST_TRANSPORT", 00:33:36.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:36.698 "adrfam": "ipv4", 00:33:36.698 "trsvcid": "$NVMF_PORT", 00:33:36.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:36.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:36.698 "hdgst": ${hdgst:-false}, 00:33:36.698 "ddgst": ${ddgst:-false} 00:33:36.698 }, 00:33:36.698 "method": "bdev_nvme_attach_controller" 00:33:36.698 } 00:33:36.698 EOF 00:33:36.698 )") 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=708989 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:36.698 { 00:33:36.698 "params": { 00:33:36.698 "name": 
"Nvme$subsystem", 00:33:36.698 "trtype": "$TEST_TRANSPORT", 00:33:36.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:36.698 "adrfam": "ipv4", 00:33:36.698 "trsvcid": "$NVMF_PORT", 00:33:36.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:36.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:36.698 "hdgst": ${hdgst:-false}, 00:33:36.698 "ddgst": ${ddgst:-false} 00:33:36.698 }, 00:33:36.698 "method": "bdev_nvme_attach_controller" 00:33:36.698 } 00:33:36.698 EOF 00:33:36.698 )") 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:36.698 { 00:33:36.698 "params": { 00:33:36.698 "name": "Nvme$subsystem", 00:33:36.698 "trtype": "$TEST_TRANSPORT", 00:33:36.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:36.698 "adrfam": "ipv4", 00:33:36.698 "trsvcid": "$NVMF_PORT", 00:33:36.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:36.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:36.698 "hdgst": ${hdgst:-false}, 00:33:36.698 "ddgst": ${ddgst:-false} 00:33:36.698 }, 00:33:36.698 "method": 
"bdev_nvme_attach_controller" 00:33:36.698 } 00:33:36.698 EOF 00:33:36.698 )") 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 708980 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:36.698 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:36.698 "params": { 00:33:36.698 "name": "Nvme1", 00:33:36.698 "trtype": "tcp", 00:33:36.698 "traddr": "10.0.0.2", 00:33:36.698 "adrfam": "ipv4", 00:33:36.698 "trsvcid": "4420", 00:33:36.698 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:36.698 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:36.699 "hdgst": false, 00:33:36.699 "ddgst": false 00:33:36.699 }, 00:33:36.699 "method": "bdev_nvme_attach_controller" 00:33:36.699 }' 00:33:36.699 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:33:36.699 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:36.699 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:36.699 "params": { 00:33:36.699 "name": "Nvme1", 00:33:36.699 "trtype": "tcp", 00:33:36.699 "traddr": "10.0.0.2", 00:33:36.699 "adrfam": "ipv4", 00:33:36.699 "trsvcid": "4420", 00:33:36.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:36.699 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:36.699 "hdgst": false, 00:33:36.699 "ddgst": false 00:33:36.699 }, 00:33:36.699 "method": "bdev_nvme_attach_controller" 00:33:36.699 }' 00:33:36.699 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:36.699 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:36.699 "params": { 00:33:36.699 "name": "Nvme1", 00:33:36.699 "trtype": "tcp", 00:33:36.699 "traddr": "10.0.0.2", 00:33:36.699 "adrfam": "ipv4", 00:33:36.699 "trsvcid": "4420", 00:33:36.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:36.699 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:36.699 "hdgst": false, 00:33:36.699 "ddgst": false 00:33:36.699 }, 00:33:36.699 "method": "bdev_nvme_attach_controller" 00:33:36.699 }' 00:33:36.699 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:36.699 05:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:36.699 "params": { 00:33:36.699 "name": "Nvme1", 00:33:36.699 "trtype": "tcp", 00:33:36.699 "traddr": "10.0.0.2", 00:33:36.699 "adrfam": "ipv4", 00:33:36.699 "trsvcid": "4420", 00:33:36.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:36.699 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:36.699 "hdgst": false, 00:33:36.699 "ddgst": false 00:33:36.699 }, 00:33:36.699 "method": "bdev_nvme_attach_controller" 
00:33:36.699 }' 00:33:36.699 [2024-12-09 05:28:19.000275] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:33:36.699 [2024-12-09 05:28:19.000330] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:36.699 [2024-12-09 05:28:19.002160] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:33:36.699 [2024-12-09 05:28:19.002206] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:36.699 [2024-12-09 05:28:19.005507] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:33:36.699 [2024-12-09 05:28:19.005553] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:36.699 [2024-12-09 05:28:19.007096] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:33:36.699 [2024-12-09 05:28:19.007139] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:36.958 [2024-12-09 05:28:19.194712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.958 [2024-12-09 05:28:19.235926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:36.958 [2024-12-09 05:28:19.285446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.958 [2024-12-09 05:28:19.327230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:36.958 [2024-12-09 05:28:19.380923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.218 [2024-12-09 05:28:19.436027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:37.218 [2024-12-09 05:28:19.437314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.218 [2024-12-09 05:28:19.478055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:37.218 Running I/O for 1 seconds... 00:33:37.218 Running I/O for 1 seconds... 00:33:37.218 Running I/O for 1 seconds... 00:33:37.477 Running I/O for 1 seconds... 
00:33:38.419 247336.00 IOPS, 966.16 MiB/s 00:33:38.419 Latency(us) 00:33:38.419 [2024-12-09T04:28:20.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.419 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:33:38.419 Nvme1n1 : 1.00 246973.29 964.74 0.00 0.00 515.69 217.91 1454.90 00:33:38.419 [2024-12-09T04:28:20.889Z] =================================================================================================================== 00:33:38.419 [2024-12-09T04:28:20.889Z] Total : 246973.29 964.74 0.00 0.00 515.69 217.91 1454.90 00:33:38.419 11722.00 IOPS, 45.79 MiB/s 00:33:38.419 Latency(us) 00:33:38.419 [2024-12-09T04:28:20.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.419 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:33:38.419 Nvme1n1 : 1.01 11784.43 46.03 0.00 0.00 10826.55 3538.94 14050.92 00:33:38.419 [2024-12-09T04:28:20.889Z] =================================================================================================================== 00:33:38.419 [2024-12-09T04:28:20.889Z] Total : 11784.43 46.03 0.00 0.00 10826.55 3538.94 14050.92 00:33:38.419 11962.00 IOPS, 46.73 MiB/s 00:33:38.419 Latency(us) 00:33:38.419 [2024-12-09T04:28:20.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.419 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:33:38.419 Nvme1n1 : 1.01 12033.74 47.01 0.00 0.00 10606.55 4194.30 13736.35 00:33:38.419 [2024-12-09T04:28:20.889Z] =================================================================================================================== 00:33:38.419 [2024-12-09T04:28:20.889Z] Total : 12033.74 47.01 0.00 0.00 10606.55 4194.30 13736.35 00:33:38.419 11442.00 IOPS, 44.70 MiB/s [2024-12-09T04:28:20.889Z] 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 708983 00:33:38.419 00:33:38.419 
Latency(us) 00:33:38.419 [2024-12-09T04:28:20.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.419 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:33:38.419 Nvme1n1 : 1.01 11515.42 44.98 0.00 0.00 11085.01 3984.59 17301.50 00:33:38.419 [2024-12-09T04:28:20.889Z] =================================================================================================================== 00:33:38.419 [2024-12-09T04:28:20.889Z] Total : 11515.42 44.98 0.00 0.00 11085.01 3984.59 17301.50 00:33:38.419 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 708985 00:33:38.419 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 708989 00:33:38.419 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:38.419 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.419 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:38.676 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.676 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:33:38.676 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:38.676 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:38.676 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:38.676 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:33:38.676 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:38.676 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:38.676 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:38.676 rmmod nvme_tcp 00:33:38.676 rmmod nvme_fabrics 00:33:38.676 rmmod nvme_keyring 00:33:38.676 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:38.676 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:38.676 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:38.676 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 708703 ']' 00:33:38.676 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 708703 00:33:38.676 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 708703 ']' 00:33:38.676 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 708703 00:33:38.676 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:33:38.676 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:38.676 05:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 708703 00:33:38.676 05:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:38.676 05:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:38.676 05:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 708703' 00:33:38.676 killing process with pid 708703 00:33:38.676 05:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 708703 00:33:38.676 05:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 708703 00:33:38.935 05:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:38.935 05:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:38.935 05:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:38.935 05:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:38.935 05:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:33:38.935 05:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:38.935 05:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:38.935 05:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:38.935 05:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:38.935 05:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.935 05:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:38.935 05:28:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.838 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:41.097 00:33:41.097 real 0m12.983s 00:33:41.097 user 0m15.291s 00:33:41.097 sys 0m8.270s 00:33:41.097 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:41.097 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:41.097 ************************************ 00:33:41.097 END TEST nvmf_bdev_io_wait 00:33:41.097 ************************************ 00:33:41.097 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:41.097 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:41.097 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:41.097 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:41.097 ************************************ 00:33:41.097 START TEST nvmf_queue_depth 00:33:41.097 ************************************ 00:33:41.097 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:41.097 * Looking for test storage... 
00:33:41.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:41.097 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:41.097 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:33:41.097 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:41.355 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:41.355 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:41.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.356 --rc genhtml_branch_coverage=1 00:33:41.356 --rc genhtml_function_coverage=1 00:33:41.356 --rc genhtml_legend=1 00:33:41.356 --rc geninfo_all_blocks=1 00:33:41.356 --rc geninfo_unexecuted_blocks=1 00:33:41.356 00:33:41.356 ' 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:41.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.356 --rc genhtml_branch_coverage=1 00:33:41.356 --rc genhtml_function_coverage=1 00:33:41.356 --rc genhtml_legend=1 00:33:41.356 --rc geninfo_all_blocks=1 00:33:41.356 --rc geninfo_unexecuted_blocks=1 00:33:41.356 00:33:41.356 ' 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:41.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.356 --rc genhtml_branch_coverage=1 00:33:41.356 --rc genhtml_function_coverage=1 00:33:41.356 --rc genhtml_legend=1 00:33:41.356 --rc geninfo_all_blocks=1 00:33:41.356 --rc geninfo_unexecuted_blocks=1 00:33:41.356 00:33:41.356 ' 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:41.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.356 --rc genhtml_branch_coverage=1 00:33:41.356 --rc genhtml_function_coverage=1 00:33:41.356 --rc genhtml_legend=1 00:33:41.356 --rc 
geninfo_all_blocks=1 00:33:41.356 --rc geninfo_unexecuted_blocks=1 00:33:41.356 00:33:41.356 ' 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.356 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.356 05:28:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:41.357 05:28:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:41.357 05:28:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:41.357 05:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:49.486 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:49.486 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:49.486 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:49.486 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:49.486 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:49.486 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:49.486 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:49.486 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:49.486 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:49.486 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:49.486 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:49.486 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:49.486 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:49.486 
05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=()
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:33:49.487 Found 0000:af:00.0 (0x8086 - 0x159b)
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:33:49.487 Found 0000:af:00.1 (0x8086 - 0x159b)
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]]
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:33:49.487 Found net devices under 0000:af:00.0: cvl_0_0
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]]
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:33:49.487 Found net devices under 0000:af:00.1: cvl_0_1
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:33:49.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:49.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms
00:33:49.487
00:33:49.487 --- 10.0.0.2 ping statistics ---
00:33:49.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:49.487 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:49.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:49.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms
00:33:49.487
00:33:49.487 --- 10.0.0.1 ping statistics ---
00:33:49.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:49.487 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms
00:33:49.487 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:49.488 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0
00:33:49.488 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:33:49.488 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:49.488 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:33:49.488 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:33:49.488 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:49.488 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:33:49.488 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:33:49.488 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:33:49.488 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:49.488 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:49.488 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:49.488 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=712978
00:33:49.488 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
00:33:49.488 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 712978
00:33:49.488 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 712978 ']'
00:33:49.488 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:49.488 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:49.488 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:49.488 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:49.488 05:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:49.488 [2024-12-09 05:28:30.966084] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:33:49.488 [2024-12-09 05:28:30.967034] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
00:33:49.488 [2024-12-09 05:28:30.967070] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:49.488 [2024-12-09 05:28:31.065094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:49.488 [2024-12-09 05:28:31.105417] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:49.488 [2024-12-09 05:28:31.105453] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:49.488 [2024-12-09 05:28:31.105463] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:49.488 [2024-12-09 05:28:31.105472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:49.488 [2024-12-09 05:28:31.105480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:49.488 [2024-12-09 05:28:31.106026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:49.488 [2024-12-09 05:28:31.173041] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:33:49.488 [2024-12-09 05:28:31.173280] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:49.488 [2024-12-09 05:28:31.850747] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:49.488 Malloc0
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:49.488 [2024-12-09 05:28:31.926929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=713162
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 713162 /var/tmp/bdevperf.sock
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 713162 ']'
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:49.488 05:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:49.747 [2024-12-09 05:28:31.981094] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
00:33:49.747 [2024-12-09 05:28:31.981140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid713162 ]
00:33:49.747 [2024-12-09 05:28:32.077095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:49.747 [2024-12-09 05:28:32.118195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:50.684 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:50.684 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:33:50.684 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:33:50.684 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:50.684 05:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:50.684 NVMe0n1
00:33:50.684 05:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:50.684 05:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:33:50.684 Running I/O for 10 seconds...
00:33:52.998 12272.00 IOPS, 47.94 MiB/s [2024-12-09T04:28:36.403Z] 12368.50 IOPS, 48.31 MiB/s [2024-12-09T04:28:37.340Z] 12621.00 IOPS, 49.30 MiB/s [2024-12-09T04:28:38.281Z] 12680.25 IOPS, 49.53 MiB/s [2024-12-09T04:28:39.219Z] 12693.00 IOPS, 49.58 MiB/s [2024-12-09T04:28:40.157Z] 12698.67 IOPS, 49.60 MiB/s [2024-12-09T04:28:41.539Z] 12716.71 IOPS, 49.67 MiB/s [2024-12-09T04:28:42.479Z] 12683.75 IOPS, 49.55 MiB/s [2024-12-09T04:28:43.419Z] 12739.44 IOPS, 49.76 MiB/s [2024-12-09T04:28:43.419Z] 12747.30 IOPS, 49.79 MiB/s
00:34:00.949 Latency(us)
00:34:00.949 [2024-12-09T04:28:43.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:00.949 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:34:00.949 Verification LBA range: start 0x0 length 0x4000
00:34:00.949 NVMe0n1 : 10.05 12773.93 49.90 0.00 0.00 79884.71 13946.06 53267.66
00:34:00.949 [2024-12-09T04:28:43.419Z] ===================================================================================================================
00:34:00.949 [2024-12-09T04:28:43.419Z] Total : 12773.93 49.90 0.00 0.00 79884.71 13946.06 53267.66
00:34:00.949 {
00:34:00.949 "results": [
00:34:00.949 {
00:34:00.949 "job": "NVMe0n1",
00:34:00.949 "core_mask": "0x1",
00:34:00.949 "workload": "verify",
00:34:00.949 "status": "finished",
00:34:00.949 "verify_range": {
00:34:00.949 "start": 0,
00:34:00.949 "length": 16384
00:34:00.949 },
00:34:00.949 "queue_depth": 1024,
00:34:00.949 "io_size": 4096,
00:34:00.949 "runtime": 10.052116,
00:34:00.949 "iops": 12773.927399962357,
00:34:00.949 "mibps": 49.898153906102955,
00:34:00.949 "io_failed": 0,
00:34:00.949 "io_timeout": 0,
00:34:00.949 "avg_latency_us": 79884.70810436354,
00:34:00.949 "min_latency_us": 13946.0608,
00:34:00.949 "max_latency_us": 53267.6608
00:34:00.949 }
00:34:00.949 ],
00:34:00.949 "core_count": 1
00:34:00.949 }
00:34:00.949 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 713162
00:34:00.949 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 713162 ']'
00:34:00.949 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 713162
00:34:00.949 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:34:00.949 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:00.949 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 713162
00:34:00.949 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:00.949 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:00.949 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 713162'
killing process with pid 713162
00:34:00.949 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 713162
00:34:00.949 Received shutdown signal, test time was about 10.000000 seconds
00:34:00.949
00:34:00.949 Latency(us)
00:34:00.949 [2024-12-09T04:28:43.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:00.949 [2024-12-09T04:28:43.419Z] ===================================================================================================================
00:34:00.949 [2024-12-09T04:28:43.419Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:00.949 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 713162
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 712978 ']'
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 712978
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 712978 ']'
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 712978
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 712978
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 712978'
killing process with pid 712978
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 712978
00:34:01.208 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 712978
00:34:01.468 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:01.468 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:01.468 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:01.468 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr
00:34:01.468 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save
00:34:01.468 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:01.468 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore
00:34:01.468 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:01.468 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:01.468 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:01.468 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:01.468 05:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:04.004 05:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:04.004
00:34:04.004 real 0m22.498s
00:34:04.004 user 0m24.290s
00:34:04.004 sys 0m8.077s
00:34:04.004 05:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:04.004 05:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:34:04.004 ************************************
00:34:04.004 END TEST nvmf_queue_depth
00:34:04.004 ************************************
00:34:04.004 05:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode
00:34:04.004 05:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:34:04.004 05:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:04.004 05:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:04.004 ************************************
00:34:04.004 START TEST nvmf_target_multipath
00:34:04.004 ************************************
00:34:04.004 05:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode
00:34:04.004 * Looking for test storage...
00:34:04.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-:
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-:
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<'
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 ))
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath --
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:04.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.004 --rc genhtml_branch_coverage=1 00:34:04.004 --rc genhtml_function_coverage=1 00:34:04.004 --rc genhtml_legend=1 00:34:04.004 --rc geninfo_all_blocks=1 00:34:04.004 --rc geninfo_unexecuted_blocks=1 00:34:04.004 00:34:04.004 ' 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:04.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.004 --rc genhtml_branch_coverage=1 00:34:04.004 --rc genhtml_function_coverage=1 00:34:04.004 --rc genhtml_legend=1 00:34:04.004 --rc geninfo_all_blocks=1 00:34:04.004 --rc geninfo_unexecuted_blocks=1 00:34:04.004 00:34:04.004 ' 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:04.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.004 --rc genhtml_branch_coverage=1 00:34:04.004 --rc genhtml_function_coverage=1 00:34:04.004 --rc genhtml_legend=1 00:34:04.004 --rc geninfo_all_blocks=1 00:34:04.004 --rc geninfo_unexecuted_blocks=1 00:34:04.004 00:34:04.004 ' 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:04.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.004 --rc genhtml_branch_coverage=1 00:34:04.004 --rc genhtml_function_coverage=1 00:34:04.004 --rc genhtml_legend=1 00:34:04.004 --rc geninfo_all_blocks=1 00:34:04.004 --rc geninfo_unexecuted_blocks=1 00:34:04.004 00:34:04.004 ' 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:04.004 05:28:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.004 05:28:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:34:04.004 05:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:12.133 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:12.133 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:34:12.133 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:12.133 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:12.133 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:12.133 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:12.133 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:12.133 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:34:12.133 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:34:12.134 05:28:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:12.134 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:12.134 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:12.134 Found net devices under 0000:af:00.0: cvl_0_0 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:12.134 05:28:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:12.134 Found net devices under 0000:af:00.1: cvl_0_1 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:12.134 05:28:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:12.134 05:28:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:12.134 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:12.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:12.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:34:12.134 00:34:12.134 --- 10.0.0.2 ping statistics --- 00:34:12.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.135 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:12.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:12.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:34:12.135 00:34:12.135 --- 10.0.0.1 ping statistics --- 00:34:12.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.135 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:34:12.135 only one NIC for nvmf test 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:34:12.135 05:28:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:12.135 rmmod nvme_tcp 00:34:12.135 rmmod nvme_fabrics 00:34:12.135 rmmod nvme_keyring 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:12.135 05:28:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:12.135 05:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.519 
05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:13.519 00:34:13.519 real 0m9.743s 00:34:13.519 user 0m2.167s 00:34:13.519 sys 0m5.627s 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:13.519 ************************************ 00:34:13.519 END TEST nvmf_target_multipath 00:34:13.519 ************************************ 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:13.519 ************************************ 00:34:13.519 START TEST nvmf_zcopy 00:34:13.519 ************************************ 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:13.519 * Looking for test storage... 
00:34:13.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:34:13.519 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:34:13.779 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:13.779 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:34:13.779 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:34:13.779 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:13.779 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:13.779 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:34:13.779 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:34:13.779 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:13.779 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:34:13.779 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:34:13.779 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:34:13.779 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:34:13.779 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:13.779 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:34:13.779 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:34:13.779 05:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:13.779 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:13.779 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:34:13.779 05:28:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:13.779 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:13.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.779 --rc genhtml_branch_coverage=1 00:34:13.779 --rc genhtml_function_coverage=1 00:34:13.779 --rc genhtml_legend=1 00:34:13.779 --rc geninfo_all_blocks=1 00:34:13.779 --rc geninfo_unexecuted_blocks=1 00:34:13.779 00:34:13.779 ' 00:34:13.779 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:13.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.779 --rc genhtml_branch_coverage=1 00:34:13.779 --rc genhtml_function_coverage=1 00:34:13.779 --rc genhtml_legend=1 00:34:13.779 --rc geninfo_all_blocks=1 00:34:13.779 --rc geninfo_unexecuted_blocks=1 00:34:13.779 00:34:13.779 ' 00:34:13.779 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:13.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.779 --rc genhtml_branch_coverage=1 00:34:13.779 --rc genhtml_function_coverage=1 00:34:13.779 --rc genhtml_legend=1 00:34:13.779 --rc geninfo_all_blocks=1 00:34:13.779 --rc geninfo_unexecuted_blocks=1 00:34:13.779 00:34:13.779 ' 00:34:13.779 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:13.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.779 --rc genhtml_branch_coverage=1 00:34:13.779 --rc genhtml_function_coverage=1 00:34:13.779 --rc genhtml_legend=1 00:34:13.779 --rc geninfo_all_blocks=1 00:34:13.779 --rc geninfo_unexecuted_blocks=1 00:34:13.779 00:34:13.779 ' 00:34:13.779 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:13.779 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:34:13.779 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:13.779 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:13.779 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:13.779 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:13.779 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:13.780 05:28:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:13.780 05:28:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:34:13.780 05:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:21.907 
05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:21.907 05:29:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:21.907 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:21.908 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:21.908 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:21.908 Found net devices under 0000:af:00.0: cvl_0_0 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:21.908 Found net devices under 0000:af:00.1: cvl_0_1 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:21.908 05:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:21.908 05:29:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:21.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:21.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:34:21.908 00:34:21.908 --- 10.0.0.2 ping statistics --- 00:34:21.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:21.908 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:21.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:21.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:34:21.908 00:34:21.908 --- 10.0.0.1 ping statistics --- 00:34:21.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:21.908 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=722483 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 722483 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 722483 ']' 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:21.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:21.908 05:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:21.908 [2024-12-09 05:29:03.356895] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:21.908 [2024-12-09 05:29:03.357863] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:34:21.908 [2024-12-09 05:29:03.357900] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:21.908 [2024-12-09 05:29:03.452941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:21.908 [2024-12-09 05:29:03.492533] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:21.908 [2024-12-09 05:29:03.492571] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:21.908 [2024-12-09 05:29:03.492581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:21.908 [2024-12-09 05:29:03.492590] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:21.908 [2024-12-09 05:29:03.492597] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:21.908 [2024-12-09 05:29:03.493174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:21.908 [2024-12-09 05:29:03.561066] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:21.908 [2024-12-09 05:29:03.561305] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
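Before issuing any RPCs, the harness blocks on the freshly started nvmf_tgt (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above). A loose sketch of that polling pattern is below; the real `waitforlisten` helper also verifies the pid and probes the RPC server, which this simplified version omits:

```shell
#!/usr/bin/env bash
# Simplified waitforlisten-style polling: succeed once the RPC socket exists,
# give up after max_retries attempts. Defaults mirror the trace; the retry
# interval is illustrative.
waitforlisten() {
  local rpc_addr=${1:-/var/tmp/spdk.sock}
  local max_retries=${2:-100}
  local i
  for (( i = 0; i < max_retries; i++ )); do
    [ -S "$rpc_addr" ] && return 0
    sleep 0.1
  done
  return 1
}

# Demo against a path that never appears: the helper reports failure.
waitforlisten "/tmp/no-such-spdk-$$.sock" 2 || echo 'gave up waiting'
```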
00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:21.908 [2024-12-09 05:29:04.237902] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:21.908 
05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:21.908 [2024-12-09 05:29:04.262160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:21.908 malloc0 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:21.908 { 00:34:21.908 "params": { 00:34:21.908 "name": "Nvme$subsystem", 00:34:21.908 "trtype": "$TEST_TRANSPORT", 00:34:21.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:21.908 "adrfam": "ipv4", 00:34:21.908 "trsvcid": "$NVMF_PORT", 00:34:21.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:21.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:21.908 "hdgst": ${hdgst:-false}, 00:34:21.908 "ddgst": ${ddgst:-false} 00:34:21.908 }, 00:34:21.908 "method": "bdev_nvme_attach_controller" 00:34:21.908 } 00:34:21.908 EOF 00:34:21.908 )") 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:21.908 05:29:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:21.908 05:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:21.908 "params": { 00:34:21.908 "name": "Nvme1", 00:34:21.908 "trtype": "tcp", 00:34:21.908 "traddr": "10.0.0.2", 00:34:21.908 "adrfam": "ipv4", 00:34:21.908 "trsvcid": "4420", 00:34:21.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:21.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:21.908 "hdgst": false, 00:34:21.908 "ddgst": false 00:34:21.908 }, 00:34:21.908 "method": "bdev_nvme_attach_controller" 00:34:21.908 }' 00:34:21.908 [2024-12-09 05:29:04.357949] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:34:21.908 [2024-12-09 05:29:04.357996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid722544 ] 00:34:22.167 [2024-12-09 05:29:04.445835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.167 [2024-12-09 05:29:04.485223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:22.425 Running I/O for 10 seconds... 
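The `gen_nvmf_target_json` call above renders a per-subsystem `bdev_nvme_attach_controller` entry from a heredoc template and feeds it to bdevperf via `--json /dev/fd/62`. A trimmed sketch of that rendering is below, using the same variable names as the trace; the real helper additionally joins multiple entries and compacts the result through `jq`, which is omitted here:

```shell
#!/usr/bin/env bash
# Values matching the resolved config printed in the trace above.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_nvmf_target_json() {
  local subsystem
  local config=()
  # Default to subsystem 1 when no arguments are given, as the trace does.
  for subsystem in "${@:-1}"; do
    config+=("$(
      cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,
  printf '%s\n' "${config[*]}"
}

json=$(gen_nvmf_target_json)
echo "$json"
```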
00:34:24.297 8418.00 IOPS, 65.77 MiB/s [2024-12-09T04:29:07.703Z] 8484.00 IOPS, 66.28 MiB/s [2024-12-09T04:29:09.081Z] 8504.00 IOPS, 66.44 MiB/s [2024-12-09T04:29:10.016Z] 8515.00 IOPS, 66.52 MiB/s [2024-12-09T04:29:10.951Z] 8521.00 IOPS, 66.57 MiB/s [2024-12-09T04:29:11.887Z] 8495.17 IOPS, 66.37 MiB/s [2024-12-09T04:29:12.826Z] 8502.43 IOPS, 66.43 MiB/s [2024-12-09T04:29:13.763Z] 8514.38 IOPS, 66.52 MiB/s [2024-12-09T04:29:15.144Z] 8518.33 IOPS, 66.55 MiB/s [2024-12-09T04:29:15.144Z] 8523.60 IOPS, 66.59 MiB/s 00:34:32.674 Latency(us) 00:34:32.674 [2024-12-09T04:29:15.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:32.674 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:34:32.674 Verification LBA range: start 0x0 length 0x1000 00:34:32.674 Nvme1n1 : 10.01 8527.57 66.62 0.00 0.00 14969.06 368.64 20761.80 00:34:32.674 [2024-12-09T04:29:15.144Z] =================================================================================================================== 00:34:32.674 [2024-12-09T04:29:15.144Z] Total : 8527.57 66.62 0.00 0.00 14969.06 368.64 20761.80 00:34:32.674 05:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=724387 00:34:32.674 05:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:34:32.674 05:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:32.674 05:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:34:32.674 05:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:34:32.674 05:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:32.674 05:29:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:32.674 05:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:32.674 05:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:32.674 { 00:34:32.674 "params": { 00:34:32.674 "name": "Nvme$subsystem", 00:34:32.674 "trtype": "$TEST_TRANSPORT", 00:34:32.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:32.674 "adrfam": "ipv4", 00:34:32.674 "trsvcid": "$NVMF_PORT", 00:34:32.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:32.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:32.674 "hdgst": ${hdgst:-false}, 00:34:32.674 "ddgst": ${ddgst:-false} 00:34:32.674 }, 00:34:32.674 "method": "bdev_nvme_attach_controller" 00:34:32.674 } 00:34:32.674 EOF 00:34:32.674 )") 00:34:32.674 05:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:32.674 [2024-12-09 05:29:14.917570] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.674 [2024-12-09 05:29:14.917604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.674 05:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
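From here on the log repeats "Requested NSID 1 already in use" / "Unable to add namespace" as the test keeps issuing `nvmf_subsystem_add_ns` for an NSID that is still attached. The toy model below (not SPDK code) illustrates the uniqueness check that produces those errors: a subsystem tracks its namespaces by NSID and rejects a second attach to the same slot:

```shell
#!/usr/bin/env bash
# Toy per-subsystem namespace map: nsid -> bdev name (requires bash 4+).
declare -A namespaces

subsystem_add_ns() {
  local nsid=$1 bdev=$2
  if [[ -n ${namespaces[$nsid]:-} ]]; then
    # Mirrors the error pair seen in the trace.
    echo "Requested NSID $nsid already in use" >&2
    return 1
  fi
  namespaces[$nsid]=$bdev
}

subsystem_add_ns 1 malloc0                                # first attach succeeds
subsystem_add_ns 1 malloc0 || echo 'Unable to add namespace'  # duplicate rejected
```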
00:34:32.674 05:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:32.674 05:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:32.674 "params": { 00:34:32.674 "name": "Nvme1", 00:34:32.674 "trtype": "tcp", 00:34:32.674 "traddr": "10.0.0.2", 00:34:32.674 "adrfam": "ipv4", 00:34:32.674 "trsvcid": "4420", 00:34:32.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:32.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:32.674 "hdgst": false, 00:34:32.674 "ddgst": false 00:34:32.674 }, 00:34:32.674 "method": "bdev_nvme_attach_controller" 00:34:32.674 }' 00:34:32.674 [2024-12-09 05:29:14.929517] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.674 [2024-12-09 05:29:14.929532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.674 [2024-12-09 05:29:14.941512] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.674 [2024-12-09 05:29:14.941524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.674 [2024-12-09 05:29:14.953514] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.674 [2024-12-09 05:29:14.953525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.674 [2024-12-09 05:29:14.956530] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:34:32.674 [2024-12-09 05:29:14.956577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid724387 ] 00:34:32.674 [2024-12-09 05:29:14.965512] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.674 [2024-12-09 05:29:14.965524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.674 [2024-12-09 05:29:14.977513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.674 [2024-12-09 05:29:14.977524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.674 [2024-12-09 05:29:14.989513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.674 [2024-12-09 05:29:14.989524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.674 [2024-12-09 05:29:15.001515] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.674 [2024-12-09 05:29:15.001526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.674 [2024-12-09 05:29:15.013513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.674 [2024-12-09 05:29:15.013524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.674 [2024-12-09 05:29:15.025523] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.674 [2024-12-09 05:29:15.025540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.674 [2024-12-09 05:29:15.037513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.674 [2024-12-09 05:29:15.037525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:34:32.674 [2024-12-09 05:29:15.048525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:32.674 [2024-12-09 05:29:15.049513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.674 [2024-12-09 05:29:15.049524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.674 [2024-12-09 05:29:15.061516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.674 [2024-12-09 05:29:15.061531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.674 [2024-12-09 05:29:15.073513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.674 [2024-12-09 05:29:15.073525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.674 [2024-12-09 05:29:15.085515] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.674 [2024-12-09 05:29:15.085528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.674 [2024-12-09 05:29:15.087991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.674 [2024-12-09 05:29:15.097526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.674 [2024-12-09 05:29:15.097544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.674 [2024-12-09 05:29:15.109527] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.674 [2024-12-09 05:29:15.109546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.674 [2024-12-09 05:29:15.121520] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.674 [2024-12-09 05:29:15.121537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.674 [2024-12-09 05:29:15.133518] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.674 [2024-12-09 05:29:15.133532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.933 [2024-12-09 05:29:15.145518] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.933 [2024-12-09 05:29:15.145531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.933 [2024-12-09 05:29:15.157514] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.933 [2024-12-09 05:29:15.157527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.933 [2024-12-09 05:29:15.169514] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.933 [2024-12-09 05:29:15.169526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.933 [2024-12-09 05:29:15.181531] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.933 [2024-12-09 05:29:15.181555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.933 [2024-12-09 05:29:15.193520] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.933 [2024-12-09 05:29:15.193536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.933 [2024-12-09 05:29:15.205520] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.933 [2024-12-09 05:29:15.205537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.933 [2024-12-09 05:29:15.217518] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.933 [2024-12-09 05:29:15.217534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.933 [2024-12-09 05:29:15.229521] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:32.933 [2024-12-09 05:29:15.229540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.933 [2024-12-09 05:29:15.241518] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.933 [2024-12-09 05:29:15.241535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.933 Running I/O for 5 seconds... 00:34:32.933 [2024-12-09 05:29:15.257012] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.933 [2024-12-09 05:29:15.257032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.933 [2024-12-09 05:29:15.271123] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.933 [2024-12-09 05:29:15.271143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.933 [2024-12-09 05:29:15.286380] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.933 [2024-12-09 05:29:15.286401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.933 [2024-12-09 05:29:15.301840] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.934 [2024-12-09 05:29:15.301860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.934 [2024-12-09 05:29:15.314730] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.934 [2024-12-09 05:29:15.314750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.934 [2024-12-09 05:29:15.329846] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.934 [2024-12-09 05:29:15.329866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.934 [2024-12-09 05:29:15.346065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:34:32.934 [2024-12-09 05:29:15.346084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.934 [2024-12-09 05:29:15.361289] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.934 [2024-12-09 05:29:15.361309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.934 [2024-12-09 05:29:15.374273] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.934 [2024-12-09 05:29:15.374293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.934 [2024-12-09 05:29:15.389655] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.934 [2024-12-09 05:29:15.389675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:32.934 [2024-12-09 05:29:15.401404] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:32.934 [2024-12-09 05:29:15.401424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.197 [2024-12-09 05:29:15.415511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.197 [2024-12-09 05:29:15.415530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.197 [2024-12-09 05:29:15.430342] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.197 [2024-12-09 05:29:15.430365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.197 [2024-12-09 05:29:15.445643] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.197 [2024-12-09 05:29:15.445664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.197 [2024-12-09 05:29:15.458964] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.197 
[2024-12-09 05:29:15.458984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.197 [2024-12-09 05:29:15.474225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.197 [2024-12-09 05:29:15.474246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.197 [2024-12-09 05:29:15.490028] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.197 [2024-12-09 05:29:15.490048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.197 [2024-12-09 05:29:15.505115] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.197 [2024-12-09 05:29:15.505136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.197 [2024-12-09 05:29:15.518935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.197 [2024-12-09 05:29:15.518955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.197 [2024-12-09 05:29:15.533379] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.197 [2024-12-09 05:29:15.533399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.197 [2024-12-09 05:29:15.548835] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.197 [2024-12-09 05:29:15.548855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.197 [2024-12-09 05:29:15.563164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.197 [2024-12-09 05:29:15.563184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.197 [2024-12-09 05:29:15.578401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.197 [2024-12-09 05:29:15.578421] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.197 [2024-12-09 05:29:15.593902] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.197 [2024-12-09 05:29:15.593922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.197 [2024-12-09 05:29:15.609035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.197 [2024-12-09 05:29:15.609056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.197 [2024-12-09 05:29:15.625650] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.197 [2024-12-09 05:29:15.625670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.197 [2024-12-09 05:29:15.637269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.197 [2024-12-09 05:29:15.637290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.197 [2024-12-09 05:29:15.650973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.197 [2024-12-09 05:29:15.650993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.456 [2024-12-09 05:29:15.665937] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.456 [2024-12-09 05:29:15.665957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.456 [2024-12-09 05:29:15.681706] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.456 [2024-12-09 05:29:15.681726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.456 [2024-12-09 05:29:15.694255] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.456 [2024-12-09 05:29:15.694276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:33.456 [2024-12-09 05:29:15.709614] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.456 [2024-12-09 05:29:15.709638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.456 [2024-12-09 05:29:15.723847] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.456 [2024-12-09 05:29:15.723868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.456 [2024-12-09 05:29:15.738546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.456 [2024-12-09 05:29:15.738566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.456 [2024-12-09 05:29:15.753907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.456 [2024-12-09 05:29:15.753927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.456 [2024-12-09 05:29:15.769561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.456 [2024-12-09 05:29:15.769581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.456 [2024-12-09 05:29:15.783139] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.456 [2024-12-09 05:29:15.783159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.456 [2024-12-09 05:29:15.797662] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.456 [2024-12-09 05:29:15.797684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.456 [2024-12-09 05:29:15.808556] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.456 [2024-12-09 05:29:15.808577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.456 [2024-12-09 05:29:15.823155] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.456 [2024-12-09 05:29:15.823176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.456 [2024-12-09 05:29:15.838322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.456 [2024-12-09 05:29:15.838344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.456 [2024-12-09 05:29:15.853270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.456 [2024-12-09 05:29:15.853292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.456 [2024-12-09 05:29:15.867017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.456 [2024-12-09 05:29:15.867040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.456 [2024-12-09 05:29:15.881945] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.456 [2024-12-09 05:29:15.881966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.456 [2024-12-09 05:29:15.897136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.456 [2024-12-09 05:29:15.897157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.456 [2024-12-09 05:29:15.911342] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.456 [2024-12-09 05:29:15.911362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.715 [2024-12-09 05:29:15.926430] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.715 [2024-12-09 05:29:15.926451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.715 [2024-12-09 05:29:15.940971] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:33.715 [2024-12-09 05:29:15.940991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.715 [2024-12-09 05:29:15.957362] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.715 [2024-12-09 05:29:15.957383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.715 [2024-12-09 05:29:15.973330] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.715 [2024-12-09 05:29:15.973351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.715 [2024-12-09 05:29:15.986938] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.715 [2024-12-09 05:29:15.986966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.715 [2024-12-09 05:29:16.001927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.715 [2024-12-09 05:29:16.001948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.715 [2024-12-09 05:29:16.017377] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.715 [2024-12-09 05:29:16.017398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.715 [2024-12-09 05:29:16.033726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.715 [2024-12-09 05:29:16.033748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.716 [2024-12-09 05:29:16.047281] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.716 [2024-12-09 05:29:16.047302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.716 [2024-12-09 05:29:16.062839] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.716 
[2024-12-09 05:29:16.062860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.716 [2024-12-09 05:29:16.077462] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.716 [2024-12-09 05:29:16.077484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.716 [2024-12-09 05:29:16.090340] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.716 [2024-12-09 05:29:16.090361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.716 [2024-12-09 05:29:16.104979] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.716 [2024-12-09 05:29:16.104999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.716 [2024-12-09 05:29:16.118701] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.716 [2024-12-09 05:29:16.118721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.716 [2024-12-09 05:29:16.133434] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.716 [2024-12-09 05:29:16.133456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.716 [2024-12-09 05:29:16.146593] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.716 [2024-12-09 05:29:16.146614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.716 [2024-12-09 05:29:16.161820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.716 [2024-12-09 05:29:16.161840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.716 [2024-12-09 05:29:16.177097] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.716 [2024-12-09 05:29:16.177117] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.975 [2024-12-09 05:29:16.190501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.975 [2024-12-09 05:29:16.190520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.975 [2024-12-09 05:29:16.205215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.975 [2024-12-09 05:29:16.205235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.975 [2024-12-09 05:29:16.219050] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.975 [2024-12-09 05:29:16.219071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.975 [2024-12-09 05:29:16.233943] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.975 [2024-12-09 05:29:16.233963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.975 16630.00 IOPS, 129.92 MiB/s [2024-12-09T04:29:16.445Z] [2024-12-09 05:29:16.248573] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.975 [2024-12-09 05:29:16.248594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.975 [2024-12-09 05:29:16.263744] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.975 [2024-12-09 05:29:16.263766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.975 [2024-12-09 05:29:16.278538] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.975 [2024-12-09 05:29:16.278558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.975 [2024-12-09 05:29:16.293750] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.975 [2024-12-09 05:29:16.293770] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.975 [2024-12-09 05:29:16.307133] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.975 [2024-12-09 05:29:16.307153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.975 [2024-12-09 05:29:16.321848] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.975 [2024-12-09 05:29:16.321868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.975 [2024-12-09 05:29:16.336939] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.975 [2024-12-09 05:29:16.336960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.975 [2024-12-09 05:29:16.351451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.975 [2024-12-09 05:29:16.351471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.975 [2024-12-09 05:29:16.365925] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.975 [2024-12-09 05:29:16.365945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.975 [2024-12-09 05:29:16.381412] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.975 [2024-12-09 05:29:16.381433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.975 [2024-12-09 05:29:16.395705] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.975 [2024-12-09 05:29:16.395725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.975 [2024-12-09 05:29:16.410639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.975 [2024-12-09 05:29:16.410661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:33.975 [2024-12-09 05:29:16.425806] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.975 [2024-12-09 05:29:16.425827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.975 [2024-12-09 05:29:16.441429] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:33.975 [2024-12-09 05:29:16.441449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.234 [2024-12-09 05:29:16.455086] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.234 [2024-12-09 05:29:16.455106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.234 [2024-12-09 05:29:16.470540] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.234 [2024-12-09 05:29:16.470561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.234 [2024-12-09 05:29:16.486033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.234 [2024-12-09 05:29:16.486055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.234 [2024-12-09 05:29:16.501306] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.234 [2024-12-09 05:29:16.501328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.234 [2024-12-09 05:29:16.515237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.234 [2024-12-09 05:29:16.515258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.234 [2024-12-09 05:29:16.529816] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.234 [2024-12-09 05:29:16.529836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.234 [2024-12-09 05:29:16.545386] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.234 [2024-12-09 05:29:16.545406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.234 [2024-12-09 05:29:16.559575] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.234 [2024-12-09 05:29:16.559595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.234 [2024-12-09 05:29:16.574558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.234 [2024-12-09 05:29:16.574578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.234 [2024-12-09 05:29:16.589071] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.234 [2024-12-09 05:29:16.589091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.234 [2024-12-09 05:29:16.602627] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.234 [2024-12-09 05:29:16.602648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.234 [2024-12-09 05:29:16.617488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.234 [2024-12-09 05:29:16.617508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.234 [2024-12-09 05:29:16.629632] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.234 [2024-12-09 05:29:16.629652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.234 [2024-12-09 05:29:16.643142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.234 [2024-12-09 05:29:16.643163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.234 [2024-12-09 05:29:16.657959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:34.234 [2024-12-09 05:29:16.657979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.234 [2024-12-09 05:29:16.673903] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.234 [2024-12-09 05:29:16.673923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.234 [2024-12-09 05:29:16.689274] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.234 [2024-12-09 05:29:16.689294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.493 [2024-12-09 05:29:16.705938] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.493 [2024-12-09 05:29:16.705958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.493 [2024-12-09 05:29:16.721187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.493 [2024-12-09 05:29:16.721214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.493 [2024-12-09 05:29:16.735831] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.493 [2024-12-09 05:29:16.735851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.494 [2024-12-09 05:29:16.750709] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.494 [2024-12-09 05:29:16.750729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.494 [2024-12-09 05:29:16.765690] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.494 [2024-12-09 05:29:16.765710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.494 [2024-12-09 05:29:16.776820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.494 
[2024-12-09 05:29:16.776841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.494 [2024-12-09 05:29:16.791561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.494 [2024-12-09 05:29:16.791581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.494 [2024-12-09 05:29:16.806877] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.494 [2024-12-09 05:29:16.806898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.494 [2024-12-09 05:29:16.821484] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.494 [2024-12-09 05:29:16.821504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.494 [2024-12-09 05:29:16.832817] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.494 [2024-12-09 05:29:16.832838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.494 [2024-12-09 05:29:16.847501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.494 [2024-12-09 05:29:16.847522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.494 [2024-12-09 05:29:16.861908] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.494 [2024-12-09 05:29:16.861928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.494 [2024-12-09 05:29:16.877713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.494 [2024-12-09 05:29:16.877733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.494 [2024-12-09 05:29:16.889959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.494 [2024-12-09 05:29:16.889979] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.494 [2024-12-09 05:29:16.904856] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.494 [2024-12-09 05:29:16.904876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.494 [2024-12-09 05:29:16.918466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.494 [2024-12-09 05:29:16.918486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.494 [2024-12-09 05:29:16.933669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.494 [2024-12-09 05:29:16.933690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.494 [2024-12-09 05:29:16.947134] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.494 [2024-12-09 05:29:16.947155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.494 [2024-12-09 05:29:16.961721] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.494 [2024-12-09 05:29:16.961742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.753 [2024-12-09 05:29:16.972224] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.753 [2024-12-09 05:29:16.972244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.753 [2024-12-09 05:29:16.987585] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.753 [2024-12-09 05:29:16.987606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.753 [2024-12-09 05:29:17.002247] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.753 [2024-12-09 05:29:17.002266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:34.753 [2024-12-09 05:29:17.017489] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.753 [2024-12-09 05:29:17.017508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.753 [2024-12-09 05:29:17.031589] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.753 [2024-12-09 05:29:17.031609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.753 [2024-12-09 05:29:17.046306] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.753 [2024-12-09 05:29:17.046326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.753 [2024-12-09 05:29:17.061971] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.753 [2024-12-09 05:29:17.061992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.753 [2024-12-09 05:29:17.077237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.753 [2024-12-09 05:29:17.077262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.753 [2024-12-09 05:29:17.093410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.753 [2024-12-09 05:29:17.093430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.753 [2024-12-09 05:29:17.107530] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.753 [2024-12-09 05:29:17.107551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.753 [2024-12-09 05:29:17.122625] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.753 [2024-12-09 05:29:17.122646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.753 [2024-12-09 05:29:17.137729] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.753 [2024-12-09 05:29:17.137749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.753 [2024-12-09 05:29:17.151336] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.753 [2024-12-09 05:29:17.151356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.753 [2024-12-09 05:29:17.166031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.753 [2024-12-09 05:29:17.166052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.753 [2024-12-09 05:29:17.182188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.753 [2024-12-09 05:29:17.182215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.754 [2024-12-09 05:29:17.197428] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.754 [2024-12-09 05:29:17.197448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:34.754 [2024-12-09 05:29:17.211679] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:34.754 [2024-12-09 05:29:17.211701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.013 [2024-12-09 05:29:17.226761] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.013 [2024-12-09 05:29:17.226787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.013 [2024-12-09 05:29:17.241959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.013 [2024-12-09 05:29:17.241980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.013 16662.00 IOPS, 130.17 MiB/s [2024-12-09T04:29:17.483Z] [2024-12-09 05:29:17.258023] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.013 [2024-12-09 05:29:17.258044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.013 [2024-12-09 05:29:17.273894] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.013 [2024-12-09 05:29:17.273915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.013 [2024-12-09 05:29:17.288585] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.013 [2024-12-09 05:29:17.288605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.013 [2024-12-09 05:29:17.303842] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.013 [2024-12-09 05:29:17.303862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.013 [2024-12-09 05:29:17.318542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.013 [2024-12-09 05:29:17.318562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.013 [2024-12-09 05:29:17.332978] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.013 [2024-12-09 05:29:17.332998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.013 [2024-12-09 05:29:17.347092] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.013 [2024-12-09 05:29:17.347112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.013 [2024-12-09 05:29:17.361703] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.013 [2024-12-09 05:29:17.361728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.013 [2024-12-09 05:29:17.373817] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:35.013 [2024-12-09 05:29:17.373836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.013 [2024-12-09 05:29:17.389163] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.013 [2024-12-09 05:29:17.389184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.013 [2024-12-09 05:29:17.405120] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.013 [2024-12-09 05:29:17.405140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.013 [2024-12-09 05:29:17.419643] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.013 [2024-12-09 05:29:17.419663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.013 [2024-12-09 05:29:17.434865] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.013 [2024-12-09 05:29:17.434885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.013 [2024-12-09 05:29:17.450195] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.013 [2024-12-09 05:29:17.450224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.013 [2024-12-09 05:29:17.465536] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.013 [2024-12-09 05:29:17.465559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.013 [2024-12-09 05:29:17.477321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.013 [2024-12-09 05:29:17.477343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.273 [2024-12-09 05:29:17.491698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.273 
[2024-12-09 05:29:17.491720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.273 [2024-12-09 05:29:17.506456] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.273 [2024-12-09 05:29:17.506477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.273 [2024-12-09 05:29:17.521086] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.273 [2024-12-09 05:29:17.521107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.273 [2024-12-09 05:29:17.535183] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.273 [2024-12-09 05:29:17.535205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.273 [2024-12-09 05:29:17.550078] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.273 [2024-12-09 05:29:17.550099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.273 [2024-12-09 05:29:17.564612] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.273 [2024-12-09 05:29:17.564633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.273 [2024-12-09 05:29:17.579165] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.273 [2024-12-09 05:29:17.579186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.273 [2024-12-09 05:29:17.594235] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.273 [2024-12-09 05:29:17.594255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.273 [2024-12-09 05:29:17.608976] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.273 [2024-12-09 05:29:17.608997] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:35.273 [2024-12-09 05:29:17.622367] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:35.273 [2024-12-09 05:29:17.622388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same error pair — subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" followed by nvmf_rpc.c:1520:nvmf_rpc_ns_paused "Unable to add namespace" — repeats at roughly 12-16 ms intervals from 05:29:17.637660 through 05:29:18.243543]
00:34:35.793 16628.67 IOPS, 129.91 MiB/s [2024-12-09T04:29:18.263Z]
[the error pair continues repeating from 05:29:18.258361 through 05:29:19.243594]
00:34:36.829 16602.00 IOPS, 129.70 MiB/s [2024-12-09T04:29:19.299Z]
[the error pair continues repeating from 05:29:19.258545 through 05:29:19.878494] 00:34:37.606 [2024-12-09 05:29:19.893183] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.606 [2024-12-09 05:29:19.893202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.606 [2024-12-09 05:29:19.908943] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.606 [2024-12-09 05:29:19.908963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.606 [2024-12-09 05:29:19.923248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.606 [2024-12-09 05:29:19.923268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.606 [2024-12-09 05:29:19.938387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.606 [2024-12-09 05:29:19.938408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.606 [2024-12-09 05:29:19.953057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.606 [2024-12-09 05:29:19.953077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.606 [2024-12-09 05:29:19.967187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.606 [2024-12-09 05:29:19.967213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.606 [2024-12-09 05:29:19.981789] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.606 [2024-12-09 05:29:19.981809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.606 [2024-12-09 05:29:19.997667] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.606 [2024-12-09 05:29:19.997687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.606 [2024-12-09 05:29:20.006190] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:37.606 [2024-12-09 05:29:20.006225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.606 [2024-12-09 05:29:20.024739] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.606 [2024-12-09 05:29:20.024761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.606 [2024-12-09 05:29:20.039641] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.606 [2024-12-09 05:29:20.039662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.606 [2024-12-09 05:29:20.054451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.606 [2024-12-09 05:29:20.054471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.606 [2024-12-09 05:29:20.070343] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.606 [2024-12-09 05:29:20.070365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.864 [2024-12-09 05:29:20.086239] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.864 [2024-12-09 05:29:20.086260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.864 [2024-12-09 05:29:20.102197] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.864 [2024-12-09 05:29:20.102225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.864 [2024-12-09 05:29:20.120618] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.864 [2024-12-09 05:29:20.120640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.864 [2024-12-09 05:29:20.135661] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.864 
[2024-12-09 05:29:20.135682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.864 [2024-12-09 05:29:20.151238] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.864 [2024-12-09 05:29:20.151259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.864 [2024-12-09 05:29:20.165991] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.864 [2024-12-09 05:29:20.166011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.864 [2024-12-09 05:29:20.181767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.864 [2024-12-09 05:29:20.181788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.864 [2024-12-09 05:29:20.195122] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.865 [2024-12-09 05:29:20.195142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.865 [2024-12-09 05:29:20.210399] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.865 [2024-12-09 05:29:20.210419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.865 [2024-12-09 05:29:20.225959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.865 [2024-12-09 05:29:20.225979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.865 [2024-12-09 05:29:20.241521] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.865 [2024-12-09 05:29:20.241541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.865 [2024-12-09 05:29:20.254288] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.865 [2024-12-09 05:29:20.254317] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.865 16550.20 IOPS, 129.30 MiB/s 00:34:37.865 Latency(us) 00:34:37.865 [2024-12-09T04:29:20.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:37.865 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:34:37.865 Nvme1n1 : 5.01 16555.16 129.34 0.00 0.00 7725.09 1952.97 13054.77 00:34:37.865 [2024-12-09T04:29:20.335Z] =================================================================================================================== 00:34:37.865 [2024-12-09T04:29:20.335Z] Total : 16555.16 129.34 0.00 0.00 7725.09 1952.97 13054.77 00:34:37.865 [2024-12-09 05:29:20.265521] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:37.865 [2024-12-09 05:29:20.265540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:37.865 [... previous two messages repeated at roughly 12 ms intervals through 2024-12-09 05:29:20.445 ...] 00:34:38.124 [2024-12-09 05:29:20.457513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:38.124 [2024-12-09 05:29:20.457524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:38.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (724387) - No such process 00:34:38.124 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 724387 00:34:38.124 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:38.124 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.124 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:38.124 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.124 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:38.124 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.124 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy
-- common/autotest_common.sh@10 -- # set +x 00:34:38.124 delay0 00:34:38.124 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.124 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:34:38.124 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.124 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:38.124 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.124 05:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:34:38.124 [2024-12-09 05:29:20.585698] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:46.241 [2024-12-09 05:29:27.707417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ab6e0 is same with the state(6) to be set 00:34:46.241 Initializing NVMe Controllers 00:34:46.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:46.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:46.241 Initialization complete. Launching workers. 
00:34:46.241 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 237, failed: 28477 00:34:46.241 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 28603, failed to submit 111 00:34:46.241 success 28537, unsuccessful 66, failed 0 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:46.241 rmmod nvme_tcp 00:34:46.241 rmmod nvme_fabrics 00:34:46.241 rmmod nvme_keyring 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 722483 ']' 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 722483 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 722483 ']' 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 722483 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 722483 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 722483' 00:34:46.241 killing process with pid 722483 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 722483 00:34:46.241 05:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 722483 00:34:46.241 05:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:46.241 05:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:46.242 05:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:46.242 05:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:34:46.242 05:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:34:46.242 05:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:46.242 
05:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:34:46.242 05:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:46.242 05:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:46.242 05:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:46.242 05:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:46.242 05:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:48.148 00:34:48.148 real 0m34.372s 00:34:48.148 user 0m41.614s 00:34:48.148 sys 0m15.503s 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:48.148 ************************************ 00:34:48.148 END TEST nvmf_zcopy 00:34:48.148 ************************************ 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:48.148 
************************************ 00:34:48.148 START TEST nvmf_nmic 00:34:48.148 ************************************ 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:48.148 * Looking for test storage... 00:34:48.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:48.148 05:29:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:48.148 05:29:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:48.148 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:48.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.149 --rc genhtml_branch_coverage=1 00:34:48.149 --rc genhtml_function_coverage=1 00:34:48.149 --rc genhtml_legend=1 00:34:48.149 --rc geninfo_all_blocks=1 00:34:48.149 --rc geninfo_unexecuted_blocks=1 00:34:48.149 00:34:48.149 ' 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:48.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.149 --rc genhtml_branch_coverage=1 00:34:48.149 --rc genhtml_function_coverage=1 00:34:48.149 --rc genhtml_legend=1 00:34:48.149 --rc geninfo_all_blocks=1 00:34:48.149 --rc geninfo_unexecuted_blocks=1 00:34:48.149 00:34:48.149 ' 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:48.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.149 --rc genhtml_branch_coverage=1 00:34:48.149 --rc genhtml_function_coverage=1 00:34:48.149 --rc genhtml_legend=1 00:34:48.149 --rc geninfo_all_blocks=1 00:34:48.149 --rc geninfo_unexecuted_blocks=1 00:34:48.149 00:34:48.149 ' 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:48.149 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.149 --rc genhtml_branch_coverage=1 00:34:48.149 --rc genhtml_function_coverage=1 00:34:48.149 --rc genhtml_legend=1 00:34:48.149 --rc geninfo_all_blocks=1 00:34:48.149 --rc geninfo_unexecuted_blocks=1 00:34:48.149 00:34:48.149 ' 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:34:48.149 05:29:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.149 05:29:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:34:48.149 05:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:54.938 05:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:54.938 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:54.938 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:54.938 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:54.938 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:54.938 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:54.938 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:54.938 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:54.938 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:54.938 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:54.938 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:54.938 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:54.938 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:54.938 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:55.198 05:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:55.198 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:55.198 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:55.198 05:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:55.198 Found net devices under 0000:af:00.0: cvl_0_0 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:55.198 05:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:55.198 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:55.199 Found net devices under 0000:af:00.1: cvl_0_1 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:55.199 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:55.199 05:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:55.458 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:55.458 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:55.458 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:55.458 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:55.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:55.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:34:55.458 00:34:55.458 --- 10.0.0.2 ping statistics --- 00:34:55.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:55.458 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:34:55.458 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:55.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:55.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:34:55.458 00:34:55.458 --- 10.0.0.1 ping statistics --- 00:34:55.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:55.458 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:34:55.458 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:55.458 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:34:55.458 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:55.458 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:55.458 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:55.458 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:55.458 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:55.458 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:55.458 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:55.458 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:55.458 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:55.459 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:55.459 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:55.459 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=730200 
00:34:55.459 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 730200 00:34:55.459 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:55.459 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 730200 ']' 00:34:55.459 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:55.459 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:55.459 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:55.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:55.459 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:55.459 05:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:55.459 [2024-12-09 05:29:37.827172] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:55.459 [2024-12-09 05:29:37.828187] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:34:55.459 [2024-12-09 05:29:37.828237] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:55.459 [2024-12-09 05:29:37.909010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:55.719 [2024-12-09 05:29:37.950975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:55.719 [2024-12-09 05:29:37.951009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:55.719 [2024-12-09 05:29:37.951018] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:55.719 [2024-12-09 05:29:37.951026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:55.719 [2024-12-09 05:29:37.951033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:55.719 [2024-12-09 05:29:37.952586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:55.719 [2024-12-09 05:29:37.952700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:55.719 [2024-12-09 05:29:37.952805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:55.719 [2024-12-09 05:29:37.952807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:55.719 [2024-12-09 05:29:38.021592] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:55.719 [2024-12-09 05:29:38.022678] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:55.719 [2024-12-09 05:29:38.022696] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:55.719 [2024-12-09 05:29:38.022922] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:55.719 [2024-12-09 05:29:38.022978] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:56.289 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:56.289 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:34:56.289 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:56.289 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:56.289 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:56.289 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:56.289 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:56.289 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.289 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:56.289 [2024-12-09 05:29:38.709586] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.289 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.289 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:56.289 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.289 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:56.549 Malloc0 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:56.549 [2024-12-09 05:29:38.797861] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:56.549 05:29:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:56.549 test case1: single bdev can't be used in multiple subsystems 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:56.549 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.550 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:56.550 [2024-12-09 05:29:38.829202] 
bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:56.550 [2024-12-09 05:29:38.829229] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:56.550 [2024-12-09 05:29:38.829239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:56.550 request: 00:34:56.550 { 00:34:56.550 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:56.550 "namespace": { 00:34:56.550 "bdev_name": "Malloc0", 00:34:56.550 "no_auto_visible": false, 00:34:56.550 "hide_metadata": false 00:34:56.550 }, 00:34:56.550 "method": "nvmf_subsystem_add_ns", 00:34:56.550 "req_id": 1 00:34:56.550 } 00:34:56.550 Got JSON-RPC error response 00:34:56.550 response: 00:34:56.550 { 00:34:56.550 "code": -32602, 00:34:56.550 "message": "Invalid parameters" 00:34:56.550 } 00:34:56.550 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:56.550 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:56.550 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:56.550 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:56.550 Adding namespace failed - expected result. 
00:34:56.550 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:56.550 test case2: host connect to nvmf target in multiple paths 00:34:56.550 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:56.550 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.550 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:56.550 [2024-12-09 05:29:38.845309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:56.550 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.550 05:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:56.809 05:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:57.069 05:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:57.069 05:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:34:57.069 05:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:57.069 05:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:57.069 05:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:34:58.976 05:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:58.976 05:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:58.976 05:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:58.976 05:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:58.976 05:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:58.976 05:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:34:58.976 05:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:58.976 [global] 00:34:58.976 thread=1 00:34:58.976 invalidate=1 00:34:58.976 rw=write 00:34:58.976 time_based=1 00:34:58.976 runtime=1 00:34:58.976 ioengine=libaio 00:34:58.976 direct=1 00:34:58.976 bs=4096 00:34:58.976 iodepth=1 00:34:58.976 norandommap=0 00:34:58.976 numjobs=1 00:34:58.976 00:34:58.976 verify_dump=1 00:34:58.976 verify_backlog=512 00:34:58.976 verify_state_save=0 00:34:58.976 do_verify=1 00:34:58.976 verify=crc32c-intel 00:34:58.976 [job0] 00:34:58.976 filename=/dev/nvme0n1 00:34:59.235 Could not set queue depth (nvme0n1) 00:34:59.492 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:59.492 fio-3.35 00:34:59.492 Starting 1 thread 00:35:00.427 00:35:00.428 job0: (groupid=0, jobs=1): err= 0: pid=730921: Mon Dec 9 05:29:42 
2024 00:35:00.428 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:35:00.428 slat (nsec): min=11612, max=26772, avg=24949.45, stdev=3030.71 00:35:00.428 clat (usec): min=40872, max=42024, avg=41027.76, stdev=235.77 00:35:00.428 lat (usec): min=40898, max=42051, avg=41052.71, stdev=235.35 00:35:00.428 clat percentiles (usec): 00:35:00.428 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:00.428 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:00.428 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:00.428 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:00.428 | 99.99th=[42206] 00:35:00.428 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:35:00.428 slat (usec): min=11, max=23916, avg=59.66, stdev=1056.38 00:35:00.428 clat (usec): min=124, max=348, avg=136.13, stdev=14.17 00:35:00.428 lat (usec): min=137, max=24163, avg=195.79, stdev=1061.39 00:35:00.428 clat percentiles (usec): 00:35:00.428 | 1.00th=[ 127], 5.00th=[ 129], 10.00th=[ 130], 20.00th=[ 131], 00:35:00.428 | 30.00th=[ 133], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 135], 00:35:00.428 | 70.00th=[ 137], 80.00th=[ 137], 90.00th=[ 141], 95.00th=[ 147], 00:35:00.428 | 99.00th=[ 184], 99.50th=[ 190], 99.90th=[ 351], 99.95th=[ 351], 00:35:00.428 | 99.99th=[ 351] 00:35:00.428 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:35:00.428 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:00.428 lat (usec) : 250=95.69%, 500=0.19% 00:35:00.428 lat (msec) : 50=4.12% 00:35:00.428 cpu : usr=0.30%, sys=0.69%, ctx=536, majf=0, minf=1 00:35:00.428 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:00.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:00.428 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:00.428 issued rwts: 
total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:00.428 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:00.428 00:35:00.428 Run status group 0 (all jobs): 00:35:00.428 READ: bw=87.2KiB/s (89.3kB/s), 87.2KiB/s-87.2KiB/s (89.3kB/s-89.3kB/s), io=88.0KiB (90.1kB), run=1009-1009msec 00:35:00.428 WRITE: bw=2030KiB/s (2078kB/s), 2030KiB/s-2030KiB/s (2078kB/s-2078kB/s), io=2048KiB (2097kB), run=1009-1009msec 00:35:00.428 00:35:00.428 Disk stats (read/write): 00:35:00.428 nvme0n1: ios=45/512, merge=0/0, ticks=1765/68, in_queue=1833, util=98.20% 00:35:00.687 05:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:00.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:35:00.687 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:00.687 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:35:00.687 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:00.687 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:00.687 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:00.687 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:00.687 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:35:00.687 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:00.687 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:35:00.687 05:29:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:00.687 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:35:00.945 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:00.945 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:35:00.945 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:00.945 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:00.945 rmmod nvme_tcp 00:35:00.945 rmmod nvme_fabrics 00:35:00.945 rmmod nvme_keyring 00:35:00.945 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:00.945 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:35:00.945 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:35:00.945 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 730200 ']' 00:35:00.945 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 730200 00:35:00.945 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 730200 ']' 00:35:00.945 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 730200 00:35:00.945 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:35:00.945 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:00.945 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 730200 00:35:00.945 
05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:00.945 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:00.946 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 730200' 00:35:00.946 killing process with pid 730200 00:35:00.946 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 730200 00:35:00.946 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 730200 00:35:01.254 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:01.254 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:01.254 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:01.254 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:35:01.254 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:35:01.254 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:01.254 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:35:01.254 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:01.254 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:01.254 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.254 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:01.254 05:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.157 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:03.157 00:35:03.157 real 0m15.337s 00:35:03.157 user 0m26.839s 00:35:03.157 sys 0m8.027s 00:35:03.157 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:03.157 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:03.157 ************************************ 00:35:03.157 END TEST nvmf_nmic 00:35:03.157 ************************************ 00:35:03.415 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:03.415 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:03.415 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:03.415 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:03.415 ************************************ 00:35:03.415 START TEST nvmf_fio_target 00:35:03.415 ************************************ 00:35:03.415 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:03.415 * Looking for test storage... 
00:35:03.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:03.415 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:03.415 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:35:03.415 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:03.674 
05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:03.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.674 --rc genhtml_branch_coverage=1 00:35:03.674 --rc genhtml_function_coverage=1 00:35:03.674 --rc genhtml_legend=1 00:35:03.674 --rc geninfo_all_blocks=1 00:35:03.674 --rc geninfo_unexecuted_blocks=1 00:35:03.674 00:35:03.674 ' 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:03.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.674 --rc genhtml_branch_coverage=1 00:35:03.674 --rc genhtml_function_coverage=1 00:35:03.674 --rc genhtml_legend=1 00:35:03.674 --rc geninfo_all_blocks=1 00:35:03.674 --rc geninfo_unexecuted_blocks=1 00:35:03.674 00:35:03.674 ' 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:03.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.674 --rc genhtml_branch_coverage=1 00:35:03.674 --rc genhtml_function_coverage=1 00:35:03.674 --rc genhtml_legend=1 00:35:03.674 --rc geninfo_all_blocks=1 00:35:03.674 --rc geninfo_unexecuted_blocks=1 00:35:03.674 00:35:03.674 ' 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:03.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.674 --rc genhtml_branch_coverage=1 00:35:03.674 --rc genhtml_function_coverage=1 00:35:03.674 --rc genhtml_legend=1 00:35:03.674 --rc geninfo_all_blocks=1 
00:35:03.674 --rc geninfo_unexecuted_blocks=1 00:35:03.674 00:35:03.674 ' 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:35:03.674 
05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:03.674 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.675 05:29:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:03.675 
05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:03.675 05:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:03.675 05:29:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:11.832 05:29:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:11.832 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:11.832 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:11.832 
05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:11.832 Found net 
devices under 0000:af:00.0: cvl_0_0 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:11.832 Found net devices under 0000:af:00.1: cvl_0_1 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:11.832 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:11.833 05:29:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:11.833 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:11.833 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:11.833 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:11.833 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:11.833 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:11.833 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:11.833 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:11.833 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:11.833 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:11.833 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:11.833 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:11.833 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:11.833 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:11.833 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:35:11.833 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:11.833 05:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:11.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:11.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:35:11.833 00:35:11.833 --- 10.0.0.2 ping statistics --- 00:35:11.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:11.833 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:11.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:11.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:35:11.833 00:35:11.833 --- 10.0.0.1 ping statistics --- 00:35:11.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:11.833 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:11.833 05:29:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=734895 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 734895 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 734895 ']' 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:11.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:11.833 05:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:11.833 [2024-12-09 05:29:53.264282] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:11.833 [2024-12-09 05:29:53.265253] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
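The `waitforlisten 734895` call above blocks until the freshly launched `nvmf_tgt` is up and serving RPCs on `/var/tmp/spdk.sock`. A minimal sketch of that polling pattern, with the socket path and retry count parameterized; this is an illustration only, the real helper in `autotest_common.sh` also checks that the pid is still alive and sleeps between retries (assumptions not shown here):

```shell
# Sketch of a waitforlisten-style loop: poll for the RPC UNIX socket
# until it appears or the retry budget is exhausted.
waitforlisten_sketch() {
  sock=$1 max_retries=$2 i=0
  while [ "$i" -lt "$max_retries" ]; do
    # -S tests for a socket file; the real helper would follow this
    # with an RPC probe before declaring the target ready.
    [ -S "$sock" ] && return 0
    i=$((i + 1))
  done
  return 1
}

# Example: a path that does not exist times out with status 1.
waitforlisten_sketch /var/tmp/spdk.sock 3 || echo "socket not up yet"
```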
00:35:11.833 [2024-12-09 05:29:53.265293] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:11.833 [2024-12-09 05:29:53.366542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:11.833 [2024-12-09 05:29:53.407620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:11.833 [2024-12-09 05:29:53.407661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:11.833 [2024-12-09 05:29:53.407671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:11.833 [2024-12-09 05:29:53.407679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:11.833 [2024-12-09 05:29:53.407686] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:11.833 [2024-12-09 05:29:53.409434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.833 [2024-12-09 05:29:53.409471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:11.833 [2024-12-09 05:29:53.409580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:11.833 [2024-12-09 05:29:53.409581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:11.833 [2024-12-09 05:29:53.478959] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:11.833 [2024-12-09 05:29:53.479607] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:11.833 [2024-12-09 05:29:53.479624] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:35:11.833 [2024-12-09 05:29:53.480143] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:11.833 [2024-12-09 05:29:53.480170] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:11.833 05:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:11.833 05:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:35:11.833 05:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:11.833 05:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:11.833 05:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:11.833 05:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:11.833 05:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:12.092 [2024-12-09 05:29:54.310344] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:12.092 05:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:12.351 05:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:35:12.351 05:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:35:12.351 05:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:35:12.351 05:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:12.610 05:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:35:12.610 05:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:12.868 05:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:35:12.868 05:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:35:13.126 05:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:13.126 05:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:35:13.126 05:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:13.384 05:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:35:13.384 05:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:13.643 05:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:35:13.643 05:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:35:13.902 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:13.902 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:13.902 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:14.160 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:14.160 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:14.419 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:14.419 [2024-12-09 05:29:56.886253] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:14.678 05:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:35:14.678 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:35:14.937 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:15.196 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:35:15.196 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:35:15.196 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:15.196 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:35:15.196 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:35:15.196 05:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:35:17.100 05:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:17.100 05:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:17.100 05:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:17.359 05:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:35:17.359 05:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:17.359 05:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:35:17.359 05:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:17.359 [global] 00:35:17.359 thread=1 00:35:17.359 invalidate=1 00:35:17.359 rw=write 00:35:17.359 time_based=1 00:35:17.359 runtime=1 00:35:17.359 ioengine=libaio 00:35:17.359 direct=1 00:35:17.359 bs=4096 00:35:17.359 iodepth=1 00:35:17.359 norandommap=0 00:35:17.359 numjobs=1 00:35:17.359 00:35:17.359 verify_dump=1 00:35:17.359 verify_backlog=512 00:35:17.359 verify_state_save=0 00:35:17.359 do_verify=1 00:35:17.359 verify=crc32c-intel 00:35:17.359 [job0] 00:35:17.359 filename=/dev/nvme0n1 00:35:17.359 [job1] 00:35:17.359 filename=/dev/nvme0n2 00:35:17.359 [job2] 00:35:17.359 filename=/dev/nvme0n3 00:35:17.359 [job3] 00:35:17.359 filename=/dev/nvme0n4 00:35:17.359 Could not set queue depth (nvme0n1) 00:35:17.359 Could not set queue depth (nvme0n2) 00:35:17.359 Could not set queue depth (nvme0n3) 00:35:17.359 Could not set queue depth (nvme0n4) 00:35:17.618 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:17.618 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:17.618 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:17.618 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:17.618 fio-3.35 00:35:17.618 Starting 4 threads 00:35:18.999 00:35:18.999 job0: (groupid=0, jobs=1): err= 0: pid=736173: Mon Dec 9 05:30:01 2024 00:35:18.999 read: IOPS=2050, BW=8204KiB/s (8401kB/s)(8212KiB/1001msec) 00:35:18.999 slat (nsec): min=9005, max=43529, avg=11410.09, stdev=3909.05 00:35:18.999 clat (usec): min=179, max=610, avg=238.61, stdev=29.31 00:35:18.999 lat (usec): min=190, max=620, 
avg=250.02, stdev=29.37 00:35:18.999 clat percentiles (usec): 00:35:18.999 | 1.00th=[ 194], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 223], 00:35:18.999 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 241], 00:35:18.999 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 269], 00:35:18.999 | 99.00th=[ 371], 99.50th=[ 424], 99.90th=[ 498], 99.95th=[ 570], 00:35:18.999 | 99.99th=[ 611] 00:35:18.999 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:35:18.999 slat (nsec): min=7755, max=71147, avg=14245.92, stdev=2705.81 00:35:18.999 clat (usec): min=119, max=434, avg=170.18, stdev=29.65 00:35:18.999 lat (usec): min=133, max=450, avg=184.42, stdev=29.75 00:35:18.999 clat percentiles (usec): 00:35:18.999 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:35:18.999 | 30.00th=[ 149], 40.00th=[ 155], 50.00th=[ 163], 60.00th=[ 172], 00:35:18.999 | 70.00th=[ 180], 80.00th=[ 200], 90.00th=[ 215], 95.00th=[ 223], 00:35:18.999 | 99.00th=[ 253], 99.50th=[ 273], 99.90th=[ 310], 99.95th=[ 375], 00:35:18.999 | 99.99th=[ 437] 00:35:19.000 bw ( KiB/s): min=10224, max=10224, per=43.05%, avg=10224.00, stdev= 0.00, samples=1 00:35:19.000 iops : min= 2556, max= 2556, avg=2556.00, stdev= 0.00, samples=1 00:35:19.000 lat (usec) : 250=91.39%, 500=8.56%, 750=0.04% 00:35:19.000 cpu : usr=4.10%, sys=7.20%, ctx=4615, majf=0, minf=1 00:35:19.000 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:19.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.000 issued rwts: total=2053,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.000 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:19.000 job1: (groupid=0, jobs=1): err= 0: pid=736175: Mon Dec 9 05:30:01 2024 00:35:19.000 read: IOPS=21, BW=85.8KiB/s (87.8kB/s)(88.0KiB/1026msec) 00:35:19.000 slat (nsec): min=10922, max=26381, 
avg=24754.50, stdev=3366.36 00:35:19.000 clat (usec): min=40753, max=41942, avg=41048.66, stdev=293.97 00:35:19.000 lat (usec): min=40764, max=41968, avg=41073.41, stdev=294.83 00:35:19.000 clat percentiles (usec): 00:35:19.000 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:35:19.000 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:19.000 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:35:19.000 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:35:19.000 | 99.99th=[41681] 00:35:19.000 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:35:19.000 slat (nsec): min=11460, max=45839, avg=12648.37, stdev=2015.18 00:35:19.000 clat (usec): min=127, max=459, avg=217.02, stdev=22.11 00:35:19.000 lat (usec): min=139, max=471, avg=229.67, stdev=22.31 00:35:19.000 clat percentiles (usec): 00:35:19.000 | 1.00th=[ 151], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 206], 00:35:19.000 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 219], 00:35:19.000 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 235], 95.00th=[ 251], 00:35:19.000 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 461], 99.95th=[ 461], 00:35:19.000 | 99.99th=[ 461] 00:35:19.000 bw ( KiB/s): min= 4096, max= 4096, per=17.25%, avg=4096.00, stdev= 0.00, samples=1 00:35:19.000 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:19.000 lat (usec) : 250=90.64%, 500=5.24% 00:35:19.000 lat (msec) : 50=4.12% 00:35:19.000 cpu : usr=0.49%, sys=0.59%, ctx=535, majf=0, minf=1 00:35:19.000 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:19.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.000 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.000 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:19.000 job2: 
(groupid=0, jobs=1): err= 0: pid=736186: Mon Dec 9 05:30:01 2024 00:35:19.000 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:35:19.000 slat (nsec): min=8363, max=31107, avg=9081.12, stdev=1185.66 00:35:19.000 clat (usec): min=184, max=3895, avg=249.82, stdev=92.83 00:35:19.000 lat (usec): min=199, max=3905, avg=258.90, stdev=92.87 00:35:19.000 clat percentiles (usec): 00:35:19.000 | 1.00th=[ 196], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 227], 00:35:19.000 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:35:19.000 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 289], 00:35:19.000 | 99.00th=[ 478], 99.50th=[ 502], 99.90th=[ 570], 99.95th=[ 594], 00:35:19.000 | 99.99th=[ 3884] 00:35:19.000 write: IOPS=2505, BW=9.79MiB/s (10.3MB/s)(9.80MiB/1001msec); 0 zone resets 00:35:19.000 slat (nsec): min=11054, max=44547, avg=12409.94, stdev=1636.61 00:35:19.000 clat (usec): min=130, max=1383, avg=170.83, stdev=39.41 00:35:19.000 lat (usec): min=143, max=1395, avg=183.24, stdev=39.50 00:35:19.000 clat percentiles (usec): 00:35:19.000 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 147], 00:35:19.000 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:35:19.000 | 70.00th=[ 169], 80.00th=[ 204], 90.00th=[ 221], 95.00th=[ 235], 00:35:19.000 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 347], 99.95th=[ 437], 00:35:19.000 | 99.99th=[ 1385] 00:35:19.000 bw ( KiB/s): min=10200, max=10200, per=42.95%, avg=10200.00, stdev= 0.00, samples=1 00:35:19.000 iops : min= 2550, max= 2550, avg=2550.00, stdev= 0.00, samples=1 00:35:19.000 lat (usec) : 250=87.75%, 500=11.96%, 750=0.24% 00:35:19.000 lat (msec) : 2=0.02%, 4=0.02% 00:35:19.000 cpu : usr=2.80%, sys=5.40%, ctx=4557, majf=0, minf=2 00:35:19.000 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:19.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:35:19.000 issued rwts: total=2048,2508,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.000 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:19.000 job3: (groupid=0, jobs=1): err= 0: pid=736192: Mon Dec 9 05:30:01 2024 00:35:19.000 read: IOPS=132, BW=531KiB/s (544kB/s)(532KiB/1002msec) 00:35:19.000 slat (nsec): min=9226, max=28581, avg=12373.69, stdev=5889.33 00:35:19.000 clat (usec): min=220, max=41385, avg=6675.06, stdev=14884.83 00:35:19.000 lat (usec): min=229, max=41397, avg=6687.43, stdev=14889.28 00:35:19.000 clat percentiles (usec): 00:35:19.000 | 1.00th=[ 227], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 237], 00:35:19.000 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:35:19.000 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[41157], 95.00th=[41157], 00:35:19.000 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:19.000 | 99.99th=[41157] 00:35:19.000 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:35:19.000 slat (nsec): min=6046, max=45772, avg=14341.47, stdev=3112.84 00:35:19.000 clat (usec): min=149, max=345, avg=194.78, stdev=19.37 00:35:19.000 lat (usec): min=167, max=384, avg=209.12, stdev=19.92 00:35:19.000 clat percentiles (usec): 00:35:19.000 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:35:19.000 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 196], 00:35:19.000 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 221], 00:35:19.000 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 347], 99.95th=[ 347], 00:35:19.000 | 99.99th=[ 347] 00:35:19.000 bw ( KiB/s): min= 4096, max= 4096, per=17.25%, avg=4096.00, stdev= 0.00, samples=1 00:35:19.000 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:19.000 lat (usec) : 250=87.75%, 500=8.84%, 750=0.16% 00:35:19.000 lat (msec) : 50=3.26% 00:35:19.000 cpu : usr=0.70%, sys=1.10%, ctx=648, majf=0, minf=1 00:35:19.000 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:35:19.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:19.000 issued rwts: total=133,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:19.000 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:19.000 00:35:19.000 Run status group 0 (all jobs): 00:35:19.000 READ: bw=16.2MiB/s (17.0MB/s), 85.8KiB/s-8204KiB/s (87.8kB/s-8401kB/s), io=16.6MiB (17.4MB), run=1001-1026msec 00:35:19.000 WRITE: bw=23.2MiB/s (24.3MB/s), 1996KiB/s-9.99MiB/s (2044kB/s-10.5MB/s), io=23.8MiB (25.0MB), run=1001-1026msec 00:35:19.000 00:35:19.000 Disk stats (read/write): 00:35:19.000 nvme0n1: ios=1797/2048, merge=0/0, ticks=731/324, in_queue=1055, util=99.10% 00:35:19.000 nvme0n2: ios=68/512, merge=0/0, ticks=1120/104, in_queue=1224, util=99.28% 00:35:19.000 nvme0n3: ios=1673/2048, merge=0/0, ticks=398/339, in_queue=737, util=88.02% 00:35:19.000 nvme0n4: ios=151/512, merge=0/0, ticks=1630/92, in_queue=1722, util=99.13% 00:35:19.000 05:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:35:19.000 [global] 00:35:19.000 thread=1 00:35:19.000 invalidate=1 00:35:19.000 rw=randwrite 00:35:19.000 time_based=1 00:35:19.000 runtime=1 00:35:19.000 ioengine=libaio 00:35:19.000 direct=1 00:35:19.000 bs=4096 00:35:19.000 iodepth=1 00:35:19.000 norandommap=0 00:35:19.000 numjobs=1 00:35:19.000 00:35:19.000 verify_dump=1 00:35:19.000 verify_backlog=512 00:35:19.000 verify_state_save=0 00:35:19.000 do_verify=1 00:35:19.000 verify=crc32c-intel 00:35:19.000 [job0] 00:35:19.000 filename=/dev/nvme0n1 00:35:19.000 [job1] 00:35:19.000 filename=/dev/nvme0n2 00:35:19.000 [job2] 00:35:19.000 filename=/dev/nvme0n3 00:35:19.000 [job3] 00:35:19.000 filename=/dev/nvme0n4 00:35:19.000 Could not set queue depth (nvme0n1) 
00:35:19.000 Could not set queue depth (nvme0n2) 00:35:19.000 Could not set queue depth (nvme0n3) 00:35:19.000 Could not set queue depth (nvme0n4) 00:35:19.260 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:19.260 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:19.260 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:19.260 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:19.260 fio-3.35 00:35:19.260 Starting 4 threads 00:35:20.639 00:35:20.639 job0: (groupid=0, jobs=1): err= 0: pid=736696: Mon Dec 9 05:30:02 2024 00:35:20.639 read: IOPS=1315, BW=5263KiB/s (5389kB/s)(5268KiB/1001msec) 00:35:20.639 slat (nsec): min=8918, max=26276, avg=9878.16, stdev=1310.02 00:35:20.639 clat (usec): min=196, max=41091, avg=503.39, stdev=2849.06 00:35:20.639 lat (usec): min=206, max=41101, avg=513.26, stdev=2849.36 00:35:20.639 clat percentiles (usec): 00:35:20.639 | 1.00th=[ 212], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 245], 00:35:20.639 | 30.00th=[ 255], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 310], 00:35:20.639 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 347], 95.00th=[ 375], 00:35:20.639 | 99.00th=[ 486], 99.50th=[28443], 99.90th=[41157], 99.95th=[41157], 00:35:20.639 | 99.99th=[41157] 00:35:20.639 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:35:20.639 slat (nsec): min=12225, max=53137, avg=13409.94, stdev=2476.94 00:35:20.639 clat (usec): min=122, max=1320, avg=192.11, stdev=56.68 00:35:20.639 lat (usec): min=145, max=1338, avg=205.52, stdev=56.73 00:35:20.639 clat percentiles (usec): 00:35:20.639 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:35:20.639 | 30.00th=[ 161], 40.00th=[ 172], 50.00th=[ 182], 60.00th=[ 194], 00:35:20.639 | 70.00th=[ 204], 80.00th=[ 221], 
90.00th=[ 255], 95.00th=[ 269], 00:35:20.639 | 99.00th=[ 310], 99.50th=[ 326], 99.90th=[ 1237], 99.95th=[ 1319], 00:35:20.639 | 99.99th=[ 1319] 00:35:20.639 bw ( KiB/s): min= 4096, max= 4096, per=17.16%, avg=4096.00, stdev= 0.00, samples=1 00:35:20.639 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:20.639 lat (usec) : 250=59.45%, 500=39.99%, 750=0.21% 00:35:20.639 lat (msec) : 2=0.07%, 4=0.04%, 50=0.25% 00:35:20.639 cpu : usr=3.00%, sys=4.70%, ctx=2855, majf=0, minf=1 00:35:20.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:20.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.639 issued rwts: total=1317,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.639 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:20.639 job1: (groupid=0, jobs=1): err= 0: pid=736697: Mon Dec 9 05:30:02 2024 00:35:20.639 read: IOPS=21, BW=87.7KiB/s (89.8kB/s)(88.0KiB/1003msec) 00:35:20.639 slat (nsec): min=11788, max=43184, avg=21432.32, stdev=7539.71 00:35:20.639 clat (usec): min=40774, max=42097, avg=41024.28, stdev=256.92 00:35:20.639 lat (usec): min=40800, max=42109, avg=41045.71, stdev=254.26 00:35:20.639 clat percentiles (usec): 00:35:20.639 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:35:20.639 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:20.639 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:20.639 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:20.639 | 99.99th=[42206] 00:35:20.639 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:35:20.639 slat (nsec): min=12407, max=39547, avg=13971.63, stdev=2889.49 00:35:20.639 clat (usec): min=156, max=370, avg=178.30, stdev=15.52 00:35:20.639 lat (usec): min=171, max=409, avg=192.27, stdev=16.50 00:35:20.639 clat 
percentiles (usec): 00:35:20.639 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:35:20.639 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 178], 00:35:20.639 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 196], 95.00th=[ 204], 00:35:20.639 | 99.00th=[ 227], 99.50th=[ 241], 99.90th=[ 371], 99.95th=[ 371], 00:35:20.639 | 99.99th=[ 371] 00:35:20.639 bw ( KiB/s): min= 4096, max= 4096, per=17.16%, avg=4096.00, stdev= 0.00, samples=1 00:35:20.639 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:20.639 lat (usec) : 250=95.69%, 500=0.19% 00:35:20.639 lat (msec) : 50=4.12% 00:35:20.639 cpu : usr=0.40%, sys=0.70%, ctx=535, majf=0, minf=1 00:35:20.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:20.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.639 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.639 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:20.639 job2: (groupid=0, jobs=1): err= 0: pid=736701: Mon Dec 9 05:30:02 2024 00:35:20.639 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:35:20.639 slat (nsec): min=9118, max=45121, avg=10046.13, stdev=1548.22 00:35:20.639 clat (usec): min=183, max=697, avg=259.48, stdev=46.03 00:35:20.639 lat (usec): min=193, max=710, avg=269.52, stdev=46.11 00:35:20.639 clat percentiles (usec): 00:35:20.639 | 1.00th=[ 202], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 221], 00:35:20.639 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 255], 00:35:20.639 | 70.00th=[ 273], 80.00th=[ 297], 90.00th=[ 326], 95.00th=[ 338], 00:35:20.639 | 99.00th=[ 404], 99.50th=[ 445], 99.90th=[ 570], 99.95th=[ 627], 00:35:20.639 | 99.99th=[ 701] 00:35:20.639 write: IOPS=2400, BW=9602KiB/s (9833kB/s)(9612KiB/1001msec); 0 zone resets 00:35:20.639 slat (nsec): min=12442, max=50187, avg=13607.66, 
stdev=1813.95 00:35:20.639 clat (usec): min=128, max=353, avg=167.50, stdev=25.21 00:35:20.639 lat (usec): min=141, max=378, avg=181.11, stdev=25.47 00:35:20.639 clat percentiles (usec): 00:35:20.639 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:35:20.639 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 161], 60.00th=[ 167], 00:35:20.639 | 70.00th=[ 178], 80.00th=[ 190], 90.00th=[ 206], 95.00th=[ 217], 00:35:20.639 | 99.00th=[ 235], 99.50th=[ 255], 99.90th=[ 318], 99.95th=[ 338], 00:35:20.639 | 99.99th=[ 355] 00:35:20.639 bw ( KiB/s): min=10728, max=10728, per=44.93%, avg=10728.00, stdev= 0.00, samples=1 00:35:20.639 iops : min= 2682, max= 2682, avg=2682.00, stdev= 0.00, samples=1 00:35:20.639 lat (usec) : 250=78.70%, 500=21.23%, 750=0.07% 00:35:20.639 cpu : usr=5.00%, sys=7.10%, ctx=4452, majf=0, minf=1 00:35:20.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:20.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.639 issued rwts: total=2048,2403,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.639 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:20.639 job3: (groupid=0, jobs=1): err= 0: pid=736702: Mon Dec 9 05:30:02 2024 00:35:20.639 read: IOPS=1324, BW=5299KiB/s (5426kB/s)(5304KiB/1001msec) 00:35:20.639 slat (nsec): min=8737, max=27566, avg=9620.68, stdev=1671.19 00:35:20.639 clat (usec): min=210, max=41060, avg=527.24, stdev=3343.97 00:35:20.639 lat (usec): min=219, max=41084, avg=536.86, stdev=3345.13 00:35:20.639 clat percentiles (usec): 00:35:20.639 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 231], 20.00th=[ 235], 00:35:20.639 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 245], 00:35:20.639 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 277], 95.00th=[ 297], 00:35:20.639 | 99.00th=[ 519], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:20.639 | 99.99th=[41157] 
00:35:20.639 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:35:20.639 slat (nsec): min=11577, max=41420, avg=13070.57, stdev=1358.10 00:35:20.639 clat (usec): min=140, max=319, avg=169.42, stdev=11.09 00:35:20.639 lat (usec): min=153, max=360, avg=182.49, stdev=11.40 00:35:20.639 clat percentiles (usec): 00:35:20.639 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 161], 00:35:20.639 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 169], 00:35:20.639 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 182], 95.00th=[ 190], 00:35:20.639 | 99.00th=[ 202], 99.50th=[ 212], 99.90th=[ 235], 99.95th=[ 318], 00:35:20.639 | 99.99th=[ 318] 00:35:20.639 bw ( KiB/s): min= 4096, max= 4096, per=17.16%, avg=4096.00, stdev= 0.00, samples=1 00:35:20.639 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:20.639 lat (usec) : 250=88.82%, 500=10.66%, 750=0.17%, 1000=0.03% 00:35:20.640 lat (msec) : 50=0.31% 00:35:20.640 cpu : usr=1.90%, sys=3.50%, ctx=2863, majf=0, minf=1 00:35:20.640 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:20.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.640 issued rwts: total=1326,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.640 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:20.640 00:35:20.640 Run status group 0 (all jobs): 00:35:20.640 READ: bw=18.4MiB/s (19.2MB/s), 87.7KiB/s-8184KiB/s (89.8kB/s-8380kB/s), io=18.4MiB (19.3MB), run=1001-1003msec 00:35:20.640 WRITE: bw=23.3MiB/s (24.4MB/s), 2042KiB/s-9602KiB/s (2091kB/s-9833kB/s), io=23.4MiB (24.5MB), run=1001-1003msec 00:35:20.640 00:35:20.640 Disk stats (read/write): 00:35:20.640 nvme0n1: ios=1076/1063, merge=0/0, ticks=1239/200, in_queue=1439, util=99.80% 00:35:20.640 nvme0n2: ios=68/512, merge=0/0, ticks=1416/94, in_queue=1510, util=100.00% 00:35:20.640 nvme0n3: 
ios=1728/2048, merge=0/0, ticks=1351/321, in_queue=1672, util=100.00% 00:35:20.640 nvme0n4: ios=1040/1024, merge=0/0, ticks=1561/171, in_queue=1732, util=100.00% 00:35:20.640 05:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:35:20.640 [global] 00:35:20.640 thread=1 00:35:20.640 invalidate=1 00:35:20.640 rw=write 00:35:20.640 time_based=1 00:35:20.640 runtime=1 00:35:20.640 ioengine=libaio 00:35:20.640 direct=1 00:35:20.640 bs=4096 00:35:20.640 iodepth=128 00:35:20.640 norandommap=0 00:35:20.640 numjobs=1 00:35:20.640 00:35:20.640 verify_dump=1 00:35:20.640 verify_backlog=512 00:35:20.640 verify_state_save=0 00:35:20.640 do_verify=1 00:35:20.640 verify=crc32c-intel 00:35:20.640 [job0] 00:35:20.640 filename=/dev/nvme0n1 00:35:20.640 [job1] 00:35:20.640 filename=/dev/nvme0n2 00:35:20.640 [job2] 00:35:20.640 filename=/dev/nvme0n3 00:35:20.640 [job3] 00:35:20.640 filename=/dev/nvme0n4 00:35:20.640 Could not set queue depth (nvme0n1) 00:35:20.640 Could not set queue depth (nvme0n2) 00:35:20.640 Could not set queue depth (nvme0n3) 00:35:20.640 Could not set queue depth (nvme0n4) 00:35:20.899 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:20.899 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:20.899 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:20.899 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:20.899 fio-3.35 00:35:20.899 Starting 4 threads 00:35:22.279 00:35:22.279 job0: (groupid=0, jobs=1): err= 0: pid=737144: Mon Dec 9 05:30:04 2024 00:35:22.279 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:35:22.279 slat (usec): min=2, max=14881, avg=100.21, 
stdev=628.31 00:35:22.279 clat (usec): min=2950, max=30555, avg=13121.25, stdev=4162.15 00:35:22.279 lat (usec): min=2961, max=30588, avg=13221.46, stdev=4211.75 00:35:22.279 clat percentiles (usec): 00:35:22.279 | 1.00th=[ 5014], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10028], 00:35:22.279 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11731], 60.00th=[12125], 00:35:22.279 | 70.00th=[14353], 80.00th=[15926], 90.00th=[20055], 95.00th=[21627], 00:35:22.279 | 99.00th=[24249], 99.50th=[25035], 99.90th=[27395], 99.95th=[27657], 00:35:22.279 | 99.99th=[30540] 00:35:22.279 write: IOPS=5066, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1003msec); 0 zone resets 00:35:22.279 slat (usec): min=3, max=39200, avg=93.79, stdev=760.29 00:35:22.279 clat (usec): min=419, max=66242, avg=12255.17, stdev=5519.29 00:35:22.279 lat (usec): min=1888, max=66257, avg=12348.96, stdev=5583.28 00:35:22.279 clat percentiles (usec): 00:35:22.279 | 1.00th=[ 3949], 5.00th=[ 6587], 10.00th=[ 8848], 20.00th=[ 9896], 00:35:22.279 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11600], 60.00th=[12256], 00:35:22.279 | 70.00th=[13566], 80.00th=[14091], 90.00th=[15139], 95.00th=[17171], 00:35:22.279 | 99.00th=[52167], 99.50th=[52167], 99.90th=[61604], 99.95th=[61604], 00:35:22.279 | 99.99th=[66323] 00:35:22.279 bw ( KiB/s): min=19264, max=20368, per=27.95%, avg=19816.00, stdev=780.65, samples=2 00:35:22.279 iops : min= 4816, max= 5092, avg=4954.00, stdev=195.16, samples=2 00:35:22.279 lat (usec) : 500=0.01% 00:35:22.279 lat (msec) : 2=0.06%, 4=0.76%, 10=20.57%, 20=73.09%, 50=4.86% 00:35:22.279 lat (msec) : 100=0.65% 00:35:22.279 cpu : usr=5.89%, sys=8.18%, ctx=385, majf=0, minf=1 00:35:22.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:35:22.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:22.279 issued rwts: total=4608,5082,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.279 
latency : target=0, window=0, percentile=100.00%, depth=128 00:35:22.279 job1: (groupid=0, jobs=1): err= 0: pid=737145: Mon Dec 9 05:30:04 2024 00:35:22.279 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:35:22.279 slat (usec): min=2, max=15127, avg=103.13, stdev=734.33 00:35:22.279 clat (usec): min=4470, max=70778, avg=13460.16, stdev=8528.58 00:35:22.279 lat (usec): min=4490, max=70789, avg=13563.29, stdev=8603.97 00:35:22.279 clat percentiles (usec): 00:35:22.279 | 1.00th=[ 5473], 5.00th=[ 7701], 10.00th=[ 8291], 20.00th=[ 9241], 00:35:22.279 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10683], 60.00th=[11469], 00:35:22.279 | 70.00th=[13042], 80.00th=[15270], 90.00th=[18744], 95.00th=[32637], 00:35:22.279 | 99.00th=[56886], 99.50th=[60556], 99.90th=[65799], 99.95th=[65799], 00:35:22.279 | 99.99th=[70779] 00:35:22.279 write: IOPS=5126, BW=20.0MiB/s (21.0MB/s)(20.1MiB/1003msec); 0 zone resets 00:35:22.279 slat (usec): min=2, max=18224, avg=78.39, stdev=590.94 00:35:22.279 clat (usec): min=1090, max=50224, avg=11342.64, stdev=5744.58 00:35:22.279 lat (usec): min=1099, max=50251, avg=11421.03, stdev=5783.20 00:35:22.279 clat percentiles (usec): 00:35:22.279 | 1.00th=[ 4817], 5.00th=[ 6325], 10.00th=[ 7111], 20.00th=[ 7767], 00:35:22.279 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[10552], 60.00th=[11207], 00:35:22.279 | 70.00th=[11863], 80.00th=[12256], 90.00th=[13829], 95.00th=[17433], 00:35:22.279 | 99.00th=[44303], 99.50th=[44827], 99.90th=[44827], 99.95th=[45876], 00:35:22.279 | 99.99th=[50070] 00:35:22.279 bw ( KiB/s): min=16032, max=24977, per=28.92%, avg=20504.50, stdev=6325.07, samples=2 00:35:22.279 iops : min= 4008, max= 6244, avg=5126.00, stdev=1581.09, samples=2 00:35:22.279 lat (msec) : 2=0.04%, 4=0.19%, 10=38.00%, 20=55.26%, 50=5.51% 00:35:22.279 lat (msec) : 100=0.99% 00:35:22.279 cpu : usr=5.49%, sys=7.98%, ctx=412, majf=0, minf=2 00:35:22.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:35:22.279 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:22.279 issued rwts: total=5120,5142,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:22.279 job2: (groupid=0, jobs=1): err= 0: pid=737147: Mon Dec 9 05:30:04 2024 00:35:22.279 read: IOPS=3373, BW=13.2MiB/s (13.8MB/s)(13.3MiB/1010msec) 00:35:22.279 slat (nsec): min=1821, max=15419k, avg=115533.18, stdev=791740.28 00:35:22.279 clat (usec): min=4786, max=33218, avg=14927.60, stdev=5130.72 00:35:22.279 lat (usec): min=4799, max=36443, avg=15043.13, stdev=5196.34 00:35:22.279 clat percentiles (usec): 00:35:22.279 | 1.00th=[ 6652], 5.00th=[ 7439], 10.00th=[ 8717], 20.00th=[10814], 00:35:22.279 | 30.00th=[11600], 40.00th=[12649], 50.00th=[13435], 60.00th=[15533], 00:35:22.279 | 70.00th=[17171], 80.00th=[19792], 90.00th=[21103], 95.00th=[24511], 00:35:22.279 | 99.00th=[28181], 99.50th=[31065], 99.90th=[33162], 99.95th=[33162], 00:35:22.279 | 99.99th=[33162] 00:35:22.279 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:35:22.279 slat (usec): min=2, max=44697, avg=160.44, stdev=1187.93 00:35:22.279 clat (usec): min=3879, max=54562, avg=18588.17, stdev=10278.93 00:35:22.279 lat (usec): min=3892, max=91650, avg=18748.62, stdev=10423.74 00:35:22.279 clat percentiles (usec): 00:35:22.279 | 1.00th=[ 5473], 5.00th=[ 7963], 10.00th=[ 8586], 20.00th=[ 9503], 00:35:22.279 | 30.00th=[10945], 40.00th=[11469], 50.00th=[13304], 60.00th=[19792], 00:35:22.279 | 70.00th=[22938], 80.00th=[30016], 90.00th=[35914], 95.00th=[37487], 00:35:22.279 | 99.00th=[39584], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109], 00:35:22.279 | 99.99th=[54789] 00:35:22.279 bw ( KiB/s): min=12432, max=16264, per=20.23%, avg=14348.00, stdev=2709.63, samples=2 00:35:22.279 iops : min= 3108, max= 4066, avg=3587.00, stdev=677.41, samples=2 00:35:22.279 lat (msec) : 
4=0.09%, 10=18.40%, 20=52.40%, 50=29.11%, 100=0.01% 00:35:22.279 cpu : usr=3.37%, sys=4.96%, ctx=357, majf=0, minf=1 00:35:22.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:35:22.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:22.279 issued rwts: total=3407,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:22.279 job3: (groupid=0, jobs=1): err= 0: pid=737148: Mon Dec 9 05:30:04 2024 00:35:22.279 read: IOPS=3804, BW=14.9MiB/s (15.6MB/s)(14.9MiB/1003msec) 00:35:22.279 slat (nsec): min=1805, max=25163k, avg=107877.75, stdev=785923.59 00:35:22.279 clat (usec): min=439, max=63506, avg=14321.95, stdev=8278.80 00:35:22.279 lat (usec): min=449, max=63518, avg=14429.83, stdev=8335.79 00:35:22.279 clat percentiles (usec): 00:35:22.279 | 1.00th=[ 865], 5.00th=[ 6063], 10.00th=[ 8455], 20.00th=[ 9503], 00:35:22.279 | 30.00th=[10552], 40.00th=[11469], 50.00th=[12387], 60.00th=[13566], 00:35:22.279 | 70.00th=[15270], 80.00th=[16909], 90.00th=[21365], 95.00th=[26608], 00:35:22.279 | 99.00th=[52167], 99.50th=[52167], 99.90th=[54789], 99.95th=[58459], 00:35:22.280 | 99.99th=[63701] 00:35:22.280 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:35:22.280 slat (usec): min=2, max=13314, avg=128.61, stdev=718.87 00:35:22.280 clat (msec): min=5, max=102, avg=17.72, stdev=14.65 00:35:22.280 lat (msec): min=5, max=102, avg=17.85, stdev=14.74 00:35:22.280 clat percentiles (msec): 00:35:22.280 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:35:22.280 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 14], 00:35:22.280 | 70.00th=[ 18], 80.00th=[ 26], 90.00th=[ 31], 95.00th=[ 35], 00:35:22.280 | 99.00th=[ 93], 99.50th=[ 102], 99.90th=[ 103], 99.95th=[ 103], 00:35:22.280 | 99.99th=[ 103] 00:35:22.280 bw ( KiB/s): min=14408, 
max=18396, per=23.13%, avg=16402.00, stdev=2819.94, samples=2 00:35:22.280 iops : min= 3602, max= 4599, avg=4100.50, stdev=704.99, samples=2 00:35:22.280 lat (usec) : 500=0.03%, 750=0.08%, 1000=0.58% 00:35:22.280 lat (msec) : 2=0.14%, 4=0.63%, 10=18.52%, 20=60.55%, 50=16.90% 00:35:22.280 lat (msec) : 100=2.29%, 250=0.29% 00:35:22.280 cpu : usr=5.39%, sys=5.09%, ctx=418, majf=0, minf=2 00:35:22.280 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:35:22.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:22.280 issued rwts: total=3816,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.280 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:22.280 00:35:22.280 Run status group 0 (all jobs): 00:35:22.280 READ: bw=65.6MiB/s (68.7MB/s), 13.2MiB/s-19.9MiB/s (13.8MB/s-20.9MB/s), io=66.2MiB (69.4MB), run=1003-1010msec 00:35:22.280 WRITE: bw=69.2MiB/s (72.6MB/s), 13.9MiB/s-20.0MiB/s (14.5MB/s-21.0MB/s), io=69.9MiB (73.3MB), run=1003-1010msec 00:35:22.280 00:35:22.280 Disk stats (read/write): 00:35:22.280 nvme0n1: ios=3812/4096, merge=0/0, ticks=18600/21974, in_queue=40574, util=83.27% 00:35:22.280 nvme0n2: ios=4146/4420, merge=0/0, ticks=41942/41471, in_queue=83413, util=87.65% 00:35:22.280 nvme0n3: ios=2856/3072, merge=0/0, ticks=29844/34221, in_queue=64065, util=95.40% 00:35:22.280 nvme0n4: ios=3439/3584, merge=0/0, ticks=25771/38933, in_queue=64704, util=95.22% 00:35:22.280 05:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:35:22.280 [global] 00:35:22.280 thread=1 00:35:22.280 invalidate=1 00:35:22.280 rw=randwrite 00:35:22.280 time_based=1 00:35:22.280 runtime=1 00:35:22.280 ioengine=libaio 00:35:22.280 direct=1 00:35:22.280 bs=4096 00:35:22.280 iodepth=128 
00:35:22.280 norandommap=0 00:35:22.280 numjobs=1 00:35:22.280 00:35:22.280 verify_dump=1 00:35:22.280 verify_backlog=512 00:35:22.280 verify_state_save=0 00:35:22.280 do_verify=1 00:35:22.280 verify=crc32c-intel 00:35:22.280 [job0] 00:35:22.280 filename=/dev/nvme0n1 00:35:22.280 [job1] 00:35:22.280 filename=/dev/nvme0n2 00:35:22.280 [job2] 00:35:22.280 filename=/dev/nvme0n3 00:35:22.280 [job3] 00:35:22.280 filename=/dev/nvme0n4 00:35:22.280 Could not set queue depth (nvme0n1) 00:35:22.280 Could not set queue depth (nvme0n2) 00:35:22.280 Could not set queue depth (nvme0n3) 00:35:22.280 Could not set queue depth (nvme0n4) 00:35:22.539 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:22.539 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:22.539 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:22.539 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:22.539 fio-3.35 00:35:22.539 Starting 4 threads 00:35:23.920 00:35:23.920 job0: (groupid=0, jobs=1): err= 0: pid=737776: Mon Dec 9 05:30:06 2024 00:35:23.920 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:35:23.920 slat (usec): min=2, max=32212, avg=121.98, stdev=933.44 00:35:23.920 clat (usec): min=4495, max=51795, avg=15454.00, stdev=7169.12 00:35:23.920 lat (usec): min=4507, max=60910, avg=15575.98, stdev=7246.67 00:35:23.920 clat percentiles (usec): 00:35:23.920 | 1.00th=[ 6456], 5.00th=[ 9110], 10.00th=[10552], 20.00th=[10945], 00:35:23.920 | 30.00th=[11338], 40.00th=[12125], 50.00th=[13435], 60.00th=[14746], 00:35:23.920 | 70.00th=[15533], 80.00th=[17171], 90.00th=[22152], 95.00th=[35390], 00:35:23.920 | 99.00th=[47449], 99.50th=[49546], 99.90th=[51643], 99.95th=[51643], 00:35:23.920 | 99.99th=[51643] 00:35:23.920 write: IOPS=4106, 
BW=16.0MiB/s (16.8MB/s)(16.1MiB/1005msec); 0 zone resets 00:35:23.920 slat (usec): min=3, max=14900, avg=104.32, stdev=763.89 00:35:23.920 clat (usec): min=1974, max=51760, avg=15551.14, stdev=7458.94 00:35:23.920 lat (usec): min=1992, max=51766, avg=15655.46, stdev=7512.83 00:35:23.920 clat percentiles (usec): 00:35:23.920 | 1.00th=[ 3621], 5.00th=[ 8455], 10.00th=[ 9634], 20.00th=[ 9896], 00:35:23.920 | 30.00th=[10683], 40.00th=[12518], 50.00th=[13566], 60.00th=[15139], 00:35:23.920 | 70.00th=[17957], 80.00th=[19792], 90.00th=[20317], 95.00th=[30802], 00:35:23.920 | 99.00th=[45876], 99.50th=[47449], 99.90th=[49546], 99.95th=[49546], 00:35:23.920 | 99.99th=[51643] 00:35:23.920 bw ( KiB/s): min=16351, max=16384, per=22.72%, avg=16367.50, stdev=23.33, samples=2 00:35:23.920 iops : min= 4087, max= 4096, avg=4091.50, stdev= 6.36, samples=2 00:35:23.920 lat (msec) : 2=0.02%, 4=0.81%, 10=12.94%, 20=70.69%, 50=15.35% 00:35:23.920 lat (msec) : 100=0.18% 00:35:23.920 cpu : usr=5.08%, sys=6.97%, ctx=262, majf=0, minf=2 00:35:23.920 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:35:23.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:23.920 issued rwts: total=4096,4127,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:23.920 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:23.920 job1: (groupid=0, jobs=1): err= 0: pid=737794: Mon Dec 9 05:30:06 2024 00:35:23.920 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:35:23.920 slat (usec): min=2, max=21489, avg=159.84, stdev=1115.74 00:35:23.920 clat (usec): min=2283, max=68465, avg=21561.71, stdev=12512.81 00:35:23.920 lat (usec): min=2293, max=76176, avg=21721.55, stdev=12616.10 00:35:23.920 clat percentiles (usec): 00:35:23.920 | 1.00th=[ 3392], 5.00th=[ 7635], 10.00th=[ 8979], 20.00th=[11863], 00:35:23.920 | 30.00th=[12649], 40.00th=[12911], 50.00th=[19530], 
60.00th=[22414], 00:35:23.920 | 70.00th=[26608], 80.00th=[31589], 90.00th=[38536], 95.00th=[45351], 00:35:23.920 | 99.00th=[58459], 99.50th=[65274], 99.90th=[68682], 99.95th=[68682], 00:35:23.920 | 99.99th=[68682] 00:35:23.920 write: IOPS=2663, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1005msec); 0 zone resets 00:35:23.920 slat (usec): min=3, max=35037, avg=208.36, stdev=1393.93 00:35:23.920 clat (usec): min=1819, max=106241, avg=26846.27, stdev=20213.57 00:35:23.920 lat (msec): min=5, max=106, avg=27.05, stdev=20.34 00:35:23.920 clat percentiles (msec): 00:35:23.920 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:35:23.920 | 30.00th=[ 17], 40.00th=[ 21], 50.00th=[ 21], 60.00th=[ 24], 00:35:23.920 | 70.00th=[ 30], 80.00th=[ 35], 90.00th=[ 54], 95.00th=[ 72], 00:35:23.920 | 99.00th=[ 102], 99.50th=[ 106], 99.90th=[ 107], 99.95th=[ 107], 00:35:23.920 | 99.99th=[ 107] 00:35:23.920 bw ( KiB/s): min= 8223, max=12288, per=14.24%, avg=10255.50, stdev=2874.39, samples=2 00:35:23.920 iops : min= 2055, max= 3072, avg=2563.50, stdev=719.13, samples=2 00:35:23.920 lat (msec) : 2=0.02%, 4=0.88%, 10=14.66%, 20=29.79%, 50=47.32% 00:35:23.920 lat (msec) : 100=6.63%, 250=0.71% 00:35:23.920 cpu : usr=2.49%, sys=4.38%, ctx=303, majf=0, minf=1 00:35:23.920 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:35:23.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:23.920 issued rwts: total=2560,2677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:23.920 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:23.920 job2: (groupid=0, jobs=1): err= 0: pid=737811: Mon Dec 9 05:30:06 2024 00:35:23.920 read: IOPS=5228, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1006msec) 00:35:23.920 slat (usec): min=2, max=14929, avg=87.84, stdev=752.22 00:35:23.920 clat (usec): min=3863, max=34284, avg=12110.83, stdev=3871.81 00:35:23.920 lat (usec): min=5910, max=34300, 
avg=12198.67, stdev=3923.59 00:35:23.920 clat percentiles (usec): 00:35:23.920 | 1.00th=[ 6194], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[ 9896], 00:35:23.920 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[11076], 00:35:23.920 | 70.00th=[11994], 80.00th=[14222], 90.00th=[17433], 95.00th=[19268], 00:35:23.920 | 99.00th=[27657], 99.50th=[28181], 99.90th=[28181], 99.95th=[28181], 00:35:23.921 | 99.99th=[34341] 00:35:23.921 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:35:23.921 slat (usec): min=2, max=18439, avg=82.81, stdev=728.24 00:35:23.921 clat (usec): min=1173, max=35341, avg=11316.38, stdev=3624.00 00:35:23.921 lat (usec): min=1186, max=35368, avg=11399.19, stdev=3682.01 00:35:23.921 clat percentiles (usec): 00:35:23.921 | 1.00th=[ 5735], 5.00th=[ 6849], 10.00th=[ 7111], 20.00th=[ 8225], 00:35:23.921 | 30.00th=[ 9372], 40.00th=[10290], 50.00th=[10945], 60.00th=[11338], 00:35:23.921 | 70.00th=[11863], 80.00th=[14615], 90.00th=[16909], 95.00th=[17957], 00:35:23.921 | 99.00th=[20579], 99.50th=[20841], 99.90th=[27919], 99.95th=[30278], 00:35:23.921 | 99.99th=[35390] 00:35:23.921 bw ( KiB/s): min=20528, max=24479, per=31.24%, avg=22503.50, stdev=2793.78, samples=2 00:35:23.921 iops : min= 5132, max= 6119, avg=5625.50, stdev=697.91, samples=2 00:35:23.921 lat (msec) : 2=0.07%, 4=0.10%, 10=29.03%, 20=67.48%, 50=3.31% 00:35:23.921 cpu : usr=6.97%, sys=7.46%, ctx=290, majf=0, minf=1 00:35:23.921 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:35:23.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:23.921 issued rwts: total=5260,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:23.921 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:23.921 job3: (groupid=0, jobs=1): err= 0: pid=737817: Mon Dec 9 05:30:06 2024 00:35:23.921 read: IOPS=5581, BW=21.8MiB/s 
(22.9MB/s)(22.0MiB/1009msec) 00:35:23.921 slat (usec): min=2, max=12526, avg=92.27, stdev=766.56 00:35:23.921 clat (usec): min=4424, max=31688, avg=11974.04, stdev=3285.19 00:35:23.921 lat (usec): min=4436, max=31722, avg=12066.31, stdev=3357.28 00:35:23.921 clat percentiles (usec): 00:35:23.921 | 1.00th=[ 7439], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9896], 00:35:23.921 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[11207], 00:35:23.921 | 70.00th=[11731], 80.00th=[14484], 90.00th=[17695], 95.00th=[19268], 00:35:23.921 | 99.00th=[21890], 99.50th=[22414], 99.90th=[24773], 99.95th=[24773], 00:35:23.921 | 99.99th=[31589] 00:35:23.921 write: IOPS=5680, BW=22.2MiB/s (23.3MB/s)(22.4MiB/1009msec); 0 zone resets 00:35:23.921 slat (usec): min=2, max=9339, avg=75.40, stdev=559.19 00:35:23.921 clat (usec): min=3223, max=22384, avg=10561.10, stdev=2627.13 00:35:23.921 lat (usec): min=3235, max=22388, avg=10636.50, stdev=2654.86 00:35:23.921 clat percentiles (usec): 00:35:23.921 | 1.00th=[ 4883], 5.00th=[ 6259], 10.00th=[ 6718], 20.00th=[ 8160], 00:35:23.921 | 30.00th=[ 9634], 40.00th=[10552], 50.00th=[10945], 60.00th=[11207], 00:35:23.921 | 70.00th=[11469], 80.00th=[12256], 90.00th=[13960], 95.00th=[15270], 00:35:23.921 | 99.00th=[16581], 99.50th=[17695], 99.90th=[20317], 99.95th=[22414], 00:35:23.921 | 99.99th=[22414] 00:35:23.921 bw ( KiB/s): min=20455, max=24560, per=31.25%, avg=22507.50, stdev=2902.67, samples=2 00:35:23.921 iops : min= 5113, max= 6140, avg=5626.50, stdev=726.20, samples=2 00:35:23.921 lat (msec) : 4=0.26%, 10=29.07%, 20=69.13%, 50=1.53% 00:35:23.921 cpu : usr=7.34%, sys=7.54%, ctx=349, majf=0, minf=1 00:35:23.921 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:35:23.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:23.921 issued rwts: total=5632,5732,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:35:23.921 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:23.921 00:35:23.921 Run status group 0 (all jobs): 00:35:23.921 READ: bw=67.9MiB/s (71.2MB/s), 9.95MiB/s-21.8MiB/s (10.4MB/s-22.9MB/s), io=68.5MiB (71.9MB), run=1005-1009msec 00:35:23.921 WRITE: bw=70.3MiB/s (73.8MB/s), 10.4MiB/s-22.2MiB/s (10.9MB/s-23.3MB/s), io=71.0MiB (74.4MB), run=1005-1009msec 00:35:23.921 00:35:23.921 Disk stats (read/write): 00:35:23.921 nvme0n1: ios=3247/3584, merge=0/0, ticks=48125/52100, in_queue=100225, util=84.17% 00:35:23.921 nvme0n2: ios=2099/2103, merge=0/0, ticks=24009/29010, in_queue=53019, util=99.49% 00:35:23.921 nvme0n3: ios=4266/4608, merge=0/0, ticks=48813/51224, in_queue=100037, util=88.06% 00:35:23.921 nvme0n4: ios=4608/4839, merge=0/0, ticks=51136/48463, in_queue=99599, util=89.30% 00:35:23.921 05:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:35:23.921 05:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=738253 00:35:23.921 05:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:35:23.921 05:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:35:23.921 [global] 00:35:23.921 thread=1 00:35:23.921 invalidate=1 00:35:23.921 rw=read 00:35:23.921 time_based=1 00:35:23.921 runtime=10 00:35:23.921 ioengine=libaio 00:35:23.921 direct=1 00:35:23.921 bs=4096 00:35:23.921 iodepth=1 00:35:23.921 norandommap=1 00:35:23.921 numjobs=1 00:35:23.921 00:35:23.921 [job0] 00:35:23.921 filename=/dev/nvme0n1 00:35:23.921 [job1] 00:35:23.921 filename=/dev/nvme0n2 00:35:23.921 [job2] 00:35:23.921 filename=/dev/nvme0n3 00:35:23.921 [job3] 00:35:23.921 filename=/dev/nvme0n4 00:35:23.921 Could not set queue depth (nvme0n1) 00:35:23.921 Could not set queue depth (nvme0n2) 00:35:23.921 Could 
not set queue depth (nvme0n3) 00:35:23.921 Could not set queue depth (nvme0n4) 00:35:24.179 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:24.179 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:24.179 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:24.179 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:24.179 fio-3.35 00:35:24.179 Starting 4 threads 00:35:26.712 05:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:35:26.973 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=36818944, buflen=4096 00:35:26.973 fio: pid=738424, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:26.973 05:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:35:27.232 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=35594240, buflen=4096 00:35:27.232 fio: pid=738423, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:27.232 05:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:27.232 05:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:35:27.492 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=48336896, buflen=4096 00:35:27.492 fio: pid=738420, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 
00:35:27.492 05:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:27.492 05:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:35:27.752 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=49836032, buflen=4096 00:35:27.752 fio: pid=738422, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:27.752 05:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:27.752 05:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:35:27.752 00:35:27.752 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=738420: Mon Dec 9 05:30:10 2024 00:35:27.752 read: IOPS=3869, BW=15.1MiB/s (15.8MB/s)(46.1MiB/3050msec) 00:35:27.752 slat (usec): min=8, max=11583, avg=12.16, stdev=141.37 00:35:27.752 clat (usec): min=167, max=2363, avg=242.61, stdev=33.93 00:35:27.752 lat (usec): min=177, max=12006, avg=254.78, stdev=147.65 00:35:27.752 clat percentiles (usec): 00:35:27.752 | 1.00th=[ 188], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 219], 00:35:27.752 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 247], 00:35:27.752 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 293], 00:35:27.752 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 359], 99.95th=[ 375], 00:35:27.752 | 99.99th=[ 457] 00:35:27.752 bw ( KiB/s): min=15064, max=16152, per=30.43%, avg=15480.00, stdev=515.83, samples=5 00:35:27.752 iops : min= 3766, max= 4038, avg=3870.00, stdev=128.96, samples=5 00:35:27.752 lat (usec) : 250=64.04%, 
500=35.94% 00:35:27.752 lat (msec) : 4=0.01% 00:35:27.752 cpu : usr=2.62%, sys=7.02%, ctx=11808, majf=0, minf=1 00:35:27.752 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:27.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.752 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.752 issued rwts: total=11802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.752 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:27.752 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=738422: Mon Dec 9 05:30:10 2024 00:35:27.752 read: IOPS=3715, BW=14.5MiB/s (15.2MB/s)(47.5MiB/3275msec) 00:35:27.752 slat (usec): min=2, max=11701, avg=11.58, stdev=170.78 00:35:27.752 clat (usec): min=166, max=41318, avg=254.49, stdev=533.81 00:35:27.752 lat (usec): min=174, max=41326, avg=266.07, stdev=560.71 00:35:27.752 clat percentiles (usec): 00:35:27.752 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 221], 00:35:27.752 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 251], 00:35:27.752 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 310], 00:35:27.752 | 99.00th=[ 343], 99.50th=[ 351], 99.90th=[ 388], 99.95th=[ 429], 00:35:27.752 | 99.99th=[41157] 00:35:27.752 bw ( KiB/s): min=13248, max=15454, per=29.04%, avg=14774.33, stdev=810.75, samples=6 00:35:27.752 iops : min= 3312, max= 3863, avg=3693.50, stdev=202.60, samples=6 00:35:27.752 lat (usec) : 250=58.42%, 500=41.54%, 750=0.01% 00:35:27.752 lat (msec) : 10=0.02%, 50=0.02% 00:35:27.752 cpu : usr=1.41%, sys=4.03%, ctx=12174, majf=0, minf=2 00:35:27.752 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:27.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.752 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.752 issued rwts: total=12168,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:35:27.752 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:27.752 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=738423: Mon Dec 9 05:30:10 2024 00:35:27.752 read: IOPS=3052, BW=11.9MiB/s (12.5MB/s)(33.9MiB/2847msec) 00:35:27.752 slat (usec): min=8, max=11474, avg=11.56, stdev=157.23 00:35:27.752 clat (usec): min=214, max=41039, avg=312.29, stdev=752.47 00:35:27.752 lat (usec): min=223, max=41064, avg=323.85, stdev=769.60 00:35:27.752 clat percentiles (usec): 00:35:27.752 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 249], 20.00th=[ 277], 00:35:27.752 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 310], 00:35:27.752 | 70.00th=[ 314], 80.00th=[ 318], 90.00th=[ 326], 95.00th=[ 334], 00:35:27.752 | 99.00th=[ 343], 99.50th=[ 351], 99.90th=[ 420], 99.95th=[ 523], 00:35:27.752 | 99.99th=[41157] 00:35:27.752 bw ( KiB/s): min=11096, max=12976, per=24.09%, avg=12252.80, stdev=693.73, samples=5 00:35:27.752 iops : min= 2774, max= 3244, avg=3063.20, stdev=173.43, samples=5 00:35:27.752 lat (usec) : 250=11.06%, 500=88.86%, 750=0.03% 00:35:27.752 lat (msec) : 50=0.03% 00:35:27.752 cpu : usr=1.12%, sys=3.55%, ctx=8695, majf=0, minf=1 00:35:27.752 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:27.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.752 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.752 issued rwts: total=8691,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.752 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:27.752 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=738424: Mon Dec 9 05:30:10 2024 00:35:27.752 read: IOPS=3433, BW=13.4MiB/s (14.1MB/s)(35.1MiB/2618msec) 00:35:27.752 slat (nsec): min=8296, max=70483, avg=9503.27, stdev=1518.81 00:35:27.752 clat (usec): min=209, max=41108, avg=279.36, 
stdev=1053.26 00:35:27.752 lat (usec): min=219, max=41130, avg=288.86, stdev=1053.67 00:35:27.752 clat percentiles (usec): 00:35:27.752 | 1.00th=[ 225], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 239], 00:35:27.752 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:35:27.752 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 297], 00:35:27.752 | 99.00th=[ 363], 99.50th=[ 412], 99.90th=[ 506], 99.95th=[41157], 00:35:27.752 | 99.99th=[41157] 00:35:27.752 bw ( KiB/s): min= 8408, max=15872, per=26.98%, avg=13723.20, stdev=3172.97, samples=5 00:35:27.752 iops : min= 2102, max= 3968, avg=3430.80, stdev=793.24, samples=5 00:35:27.752 lat (usec) : 250=61.46%, 500=38.42%, 750=0.03% 00:35:27.752 lat (msec) : 4=0.01%, 50=0.07% 00:35:27.752 cpu : usr=1.30%, sys=4.62%, ctx=8991, majf=0, minf=2 00:35:27.752 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:27.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.752 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.752 issued rwts: total=8990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.752 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:27.752 00:35:27.752 Run status group 0 (all jobs): 00:35:27.752 READ: bw=49.7MiB/s (52.1MB/s), 11.9MiB/s-15.1MiB/s (12.5MB/s-15.8MB/s), io=163MiB (171MB), run=2618-3275msec 00:35:27.752 00:35:27.752 Disk stats (read/write): 00:35:27.752 nvme0n1: ios=10874/0, merge=0/0, ticks=3260/0, in_queue=3260, util=98.30% 00:35:27.752 nvme0n2: ios=11301/0, merge=0/0, ticks=2822/0, in_queue=2822, util=94.32% 00:35:27.752 nvme0n3: ios=8630/0, merge=0/0, ticks=2631/0, in_queue=2631, util=95.48% 00:35:27.752 nvme0n4: ios=8799/0, merge=0/0, ticks=2400/0, in_queue=2400, util=96.39% 00:35:27.752 05:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:27.752 05:30:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:35:28.012 05:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:28.012 05:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:35:28.271 05:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:28.271 05:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:35:28.531 05:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:28.531 05:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:35:28.790 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:35:28.790 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 738253 00:35:28.790 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:35:28.790 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:28.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:28.790 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:35:28.790 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:35:28.790 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:28.790 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:28.790 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:28.790 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:28.790 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:35:28.790 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:35:28.790 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:35:28.790 nvmf hotplug test: fio failed as expected 00:35:28.790 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:29.051 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:35:29.051 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:35:29.051 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:35:29.051 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:35:29.051 05:30:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:35:29.051 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:29.051 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:35:29.051 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:29.051 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:35:29.051 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:29.051 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:29.051 rmmod nvme_tcp 00:35:29.051 rmmod nvme_fabrics 00:35:29.051 rmmod nvme_keyring 00:35:29.051 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:29.051 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:35:29.051 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:35:29.051 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 734895 ']' 00:35:29.051 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 734895 00:35:29.051 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 734895 ']' 00:35:29.051 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 734895 00:35:29.051 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:35:29.051 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:29.051 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 734895 00:35:29.310 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:29.310 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:29.310 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 734895' 00:35:29.310 killing process with pid 734895 00:35:29.310 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 734895 00:35:29.310 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 734895 00:35:29.311 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:29.311 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:29.311 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:29.311 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:35:29.311 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:29.311 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:35:29.311 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:29.311 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:29.311 05:30:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:29.311 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:29.311 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:29.311 05:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.867 05:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:31.867 00:35:31.867 real 0m28.144s 00:35:31.867 user 1m42.821s 00:35:31.867 sys 0m15.879s 00:35:31.867 05:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:31.867 05:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:31.867 ************************************ 00:35:31.867 END TEST nvmf_fio_target 00:35:31.867 ************************************ 00:35:31.867 05:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:31.867 05:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:31.867 05:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:31.867 05:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:31.867 ************************************ 00:35:31.867 START TEST nvmf_bdevio 00:35:31.867 ************************************ 00:35:31.867 05:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:31.867 * Looking for test storage... 00:35:31.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:35:31.867 05:30:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # 
(( ver1[v] < ver2[v] )) 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:31.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.867 --rc genhtml_branch_coverage=1 00:35:31.867 --rc genhtml_function_coverage=1 00:35:31.867 --rc genhtml_legend=1 00:35:31.867 --rc geninfo_all_blocks=1 00:35:31.867 --rc geninfo_unexecuted_blocks=1 00:35:31.867 00:35:31.867 ' 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:31.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.867 --rc genhtml_branch_coverage=1 00:35:31.867 --rc genhtml_function_coverage=1 00:35:31.867 --rc genhtml_legend=1 00:35:31.867 --rc geninfo_all_blocks=1 00:35:31.867 --rc geninfo_unexecuted_blocks=1 00:35:31.867 00:35:31.867 ' 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:31.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.867 --rc genhtml_branch_coverage=1 00:35:31.867 --rc genhtml_function_coverage=1 00:35:31.867 --rc genhtml_legend=1 00:35:31.867 --rc geninfo_all_blocks=1 00:35:31.867 --rc geninfo_unexecuted_blocks=1 00:35:31.867 00:35:31.867 ' 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:31.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.867 --rc genhtml_branch_coverage=1 00:35:31.867 --rc genhtml_function_coverage=1 00:35:31.867 --rc genhtml_legend=1 00:35:31.867 --rc 
geninfo_all_blocks=1 00:35:31.867 --rc geninfo_unexecuted_blocks=1 00:35:31.867 00:35:31.867 ' 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:35:31.867 05:30:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:31.867 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:31.868 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:31.868 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:31.868 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:31.868 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:31.868 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:31.868 05:30:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:31.868 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:31.868 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:35:31.868 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:31.868 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:31.868 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:31.868 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:31.868 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:31.868 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.868 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:31.868 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.868 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:31.868 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:31.868 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:35:31.868 05:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:39.991 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:35:39.991 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:39.991 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:39.991 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:39.991 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:39.991 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:39.991 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:39.991 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:39.992 05:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:39.992 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:39.992 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:39.992 05:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:39.992 Found net devices under 0000:af:00.0: cvl_0_0 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:39.992 Found net devices under 0000:af:00.1: cvl_0_1 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:39.992 05:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:39.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:39.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:35:39.992 00:35:39.992 --- 10.0.0.2 ping statistics --- 00:35:39.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:39.992 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:35:39.992 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:39.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:39.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:35:39.992 00:35:39.992 --- 10.0.0.1 ping statistics --- 00:35:39.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:39.992 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=742939 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 742939 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 742939 ']' 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:39.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:39.993 05:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:39.993 [2024-12-09 05:30:21.527603] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:39.993 [2024-12-09 05:30:21.528632] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:35:39.993 [2024-12-09 05:30:21.528670] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:39.993 [2024-12-09 05:30:21.629089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:39.993 [2024-12-09 05:30:21.669472] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:39.993 [2024-12-09 05:30:21.669513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:39.993 [2024-12-09 05:30:21.669523] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:39.993 [2024-12-09 05:30:21.669533] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:39.993 [2024-12-09 05:30:21.669542] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:39.993 [2024-12-09 05:30:21.671163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:39.993 [2024-12-09 05:30:21.671257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:39.993 [2024-12-09 05:30:21.671363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:39.993 [2024-12-09 05:30:21.671365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:39.993 [2024-12-09 05:30:21.739774] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:39.993 [2024-12-09 05:30:21.740673] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:39.993 [2024-12-09 05:30:21.740756] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:35:39.993 [2024-12-09 05:30:21.740953] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:39.993 [2024-12-09 05:30:21.741031] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:39.993 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:39.993 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:35:39.993 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:39.993 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:39.993 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:39.993 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:39.993 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:39.993 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.993 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:39.993 [2024-12-09 05:30:22.432230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:40.253 Malloc0 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:40.253 [2024-12-09 05:30:22.520527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:40.253 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:40.253 { 00:35:40.253 "params": { 00:35:40.253 "name": "Nvme$subsystem", 00:35:40.253 "trtype": "$TEST_TRANSPORT", 00:35:40.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:40.253 "adrfam": "ipv4", 00:35:40.253 "trsvcid": "$NVMF_PORT", 00:35:40.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:40.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:40.254 "hdgst": ${hdgst:-false}, 00:35:40.254 "ddgst": ${ddgst:-false} 00:35:40.254 }, 00:35:40.254 "method": "bdev_nvme_attach_controller" 00:35:40.254 } 00:35:40.254 EOF 00:35:40.254 )") 00:35:40.254 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:40.254 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:35:40.254 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:40.254 05:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:40.254 "params": { 00:35:40.254 "name": "Nvme1", 00:35:40.254 "trtype": "tcp", 00:35:40.254 "traddr": "10.0.0.2", 00:35:40.254 "adrfam": "ipv4", 00:35:40.254 "trsvcid": "4420", 00:35:40.254 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:40.254 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:40.254 "hdgst": false, 00:35:40.254 "ddgst": false 00:35:40.254 }, 00:35:40.254 "method": "bdev_nvme_attach_controller" 00:35:40.254 }' 00:35:40.254 [2024-12-09 05:30:22.573796] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:35:40.254 [2024-12-09 05:30:22.573843] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid743223 ] 00:35:40.254 [2024-12-09 05:30:22.665886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:40.254 [2024-12-09 05:30:22.710442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.254 [2024-12-09 05:30:22.710552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:40.254 [2024-12-09 05:30:22.710552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:40.822 I/O targets: 00:35:40.822 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:40.822 00:35:40.822 00:35:40.822 CUnit - A unit testing framework for C - Version 2.1-3 00:35:40.822 http://cunit.sourceforge.net/ 00:35:40.822 00:35:40.822 00:35:40.822 Suite: bdevio tests on: Nvme1n1 00:35:40.823 Test: blockdev write read block ...passed 00:35:40.823 Test: blockdev write zeroes read block ...passed 00:35:40.823 Test: blockdev write zeroes read no split ...passed 00:35:40.823 Test: blockdev 
write zeroes read split ...passed 00:35:40.823 Test: blockdev write zeroes read split partial ...passed 00:35:40.823 Test: blockdev reset ...[2024-12-09 05:30:23.136873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:40.823 [2024-12-09 05:30:23.136940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x997890 (9): Bad file descriptor 00:35:40.823 [2024-12-09 05:30:23.140790] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:35:40.823 passed 00:35:40.823 Test: blockdev write read 8 blocks ...passed 00:35:40.823 Test: blockdev write read size > 128k ...passed 00:35:40.823 Test: blockdev write read invalid size ...passed 00:35:40.823 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:40.823 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:40.823 Test: blockdev write read max offset ...passed 00:35:41.082 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:41.082 Test: blockdev writev readv 8 blocks ...passed 00:35:41.082 Test: blockdev writev readv 30 x 1block ...passed 00:35:41.082 Test: blockdev writev readv block ...passed 00:35:41.082 Test: blockdev writev readv size > 128k ...passed 00:35:41.083 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:41.083 Test: blockdev comparev and writev ...[2024-12-09 05:30:23.393283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:41.083 [2024-12-09 05:30:23.393315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:41.083 [2024-12-09 05:30:23.393331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:41.083 
[2024-12-09 05:30:23.393342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:41.083 [2024-12-09 05:30:23.393640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:35:41.083 [2024-12-09 05:30:23.393652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:35:41.083 [2024-12-09 05:30:23.393667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:35:41.083 [2024-12-09 05:30:23.393676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:41.083 [2024-12-09 05:30:23.393966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:35:41.083 [2024-12-09 05:30:23.393984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:35:41.083 [2024-12-09 05:30:23.393998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:35:41.083 [2024-12-09 05:30:23.394008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:35:41.083 [2024-12-09 05:30:23.394310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:35:41.083 [2024-12-09 05:30:23.394323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:35:41.083 [2024-12-09 05:30:23.394337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:35:41.083 [2024-12-09 05:30:23.394346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:35:41.083 passed
00:35:41.083 Test: blockdev nvme passthru rw ...passed
00:35:41.083 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:30:23.476612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:35:41.083 [2024-12-09 05:30:23.476630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:35:41.083 [2024-12-09 05:30:23.476749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:35:41.083 [2024-12-09 05:30:23.476761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:35:41.083 [2024-12-09 05:30:23.476875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:35:41.083 [2024-12-09 05:30:23.476887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:35:41.083 [2024-12-09 05:30:23.477001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:35:41.083 [2024-12-09 05:30:23.477013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:41.083 passed
00:35:41.083 Test: blockdev nvme admin passthru ...passed
00:35:41.083 Test: blockdev copy ...passed
00:35:41.083
00:35:41.083 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:35:41.083               suites      1      1    n/a      0        0
00:35:41.083                tests     23     23     23      0        0
00:35:41.083              asserts    152    152    152      0      n/a
00:35:41.083
00:35:41.083 Elapsed time = 1.090 seconds
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:41.343 rmmod nvme_tcp
00:35:41.343 rmmod nvme_fabrics
00:35:41.343 rmmod nvme_keyring
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 742939 ']'
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 742939
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 742939 ']'
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 742939
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:41.343 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 742939
00:35:41.603 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3
00:35:41.603 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']'
00:35:41.603 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 742939'
00:35:41.603 killing process with pid 742939
00:35:41.603 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 742939
00:35:41.603 05:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 742939
00:35:41.863 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:41.863 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:35:41.863 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:35:41.863 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr
00:35:41.863 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save
00:35:41.863 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:35:41.863 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore
00:35:41.863 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:41.863 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:41.863 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:41.863 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:41.863 05:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:43.771 05:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:43.771
00:35:43.771 real 0m12.228s
00:35:43.771 user 0m9.809s
00:35:43.771 sys 0m6.731s
00:35:43.771 05:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:43.771 05:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:35:43.771 ************************************
00:35:43.771 END TEST nvmf_bdevio
00:35:43.771 ************************************
00:35:43.771 05:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:35:43.771
00:35:43.771 real 5m0.455s
00:35:43.771 user 9m18.980s
00:35:43.771 sys 2m25.591s
00:35:43.771 05:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:43.771 05:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:35:43.771 ************************************
00:35:43.771 END TEST nvmf_target_core_interrupt_mode
00:35:43.771 ************************************
00:35:44.031 05:30:26 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:35:44.031 05:30:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:35:44.031 05:30:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:44.031 05:30:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:35:44.031 ************************************
00:35:44.031 START TEST nvmf_interrupt
00:35:44.031 ************************************
00:35:44.031 05:30:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:35:44.031 * Looking for test storage...
00:35:44.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:44.031 05:30:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:44.031 05:30:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:35:44.031 05:30:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:44.031 05:30:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:44.031 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:44.031 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:44.031 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:44.031 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:44.031 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:44.031 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:44.031 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:44.031 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:44.031 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:44.032 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:44.032 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:44.032 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:44.032 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:44.032 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:44.032 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:44.032 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:44.032 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:44.032 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:44.032 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:44.032 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:44.032 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:44.032 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:44.032 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:44.032 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:44.032 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:44.032 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:44.032 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:44.032 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:44.032 05:30:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:44.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:44.292 --rc genhtml_branch_coverage=1 00:35:44.292 --rc genhtml_function_coverage=1 00:35:44.292 --rc genhtml_legend=1 00:35:44.292 --rc geninfo_all_blocks=1 00:35:44.292 --rc geninfo_unexecuted_blocks=1 00:35:44.292 00:35:44.292 ' 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:44.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:44.292 --rc genhtml_branch_coverage=1 00:35:44.292 --rc 
genhtml_function_coverage=1 00:35:44.292 --rc genhtml_legend=1 00:35:44.292 --rc geninfo_all_blocks=1 00:35:44.292 --rc geninfo_unexecuted_blocks=1 00:35:44.292 00:35:44.292 ' 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:44.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:44.292 --rc genhtml_branch_coverage=1 00:35:44.292 --rc genhtml_function_coverage=1 00:35:44.292 --rc genhtml_legend=1 00:35:44.292 --rc geninfo_all_blocks=1 00:35:44.292 --rc geninfo_unexecuted_blocks=1 00:35:44.292 00:35:44.292 ' 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:44.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:44.292 --rc genhtml_branch_coverage=1 00:35:44.292 --rc genhtml_function_coverage=1 00:35:44.292 --rc genhtml_legend=1 00:35:44.292 --rc geninfo_all_blocks=1 00:35:44.292 --rc geninfo_unexecuted_blocks=1 00:35:44.292 00:35:44.292 ' 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:44.292 
05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:44.292 05:30:26 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.293 
05:30:26 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:44.293 05:30:26 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:44.293 
05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:44.293 05:30:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:52.418 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:52.418 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:52.418 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:52.418 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:52.418 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:52.418 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:52.418 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:52.418 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:52.418 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:52.419 05:30:33 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:52.419 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:52.419 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:52.419 05:30:33 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:52.419 Found net devices under 0000:af:00.0: cvl_0_0 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:52.419 Found net devices under 0000:af:00.1: cvl_0_1 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:52.419 05:30:33 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:52.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:52.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:35:52.419 00:35:52.419 --- 10.0.0.2 ping statistics --- 00:35:52.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:52.419 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:52.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:52.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:35:52.419 00:35:52.419 --- 10.0.0.1 ping statistics --- 00:35:52.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:52.419 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:52.419 05:30:33 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3
00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:52.419 05:30:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:35:52.420 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=747142
00:35:52.420 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:35:52.420 05:30:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 747142
00:35:52.420 05:30:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 747142 ']'
00:35:52.420 05:30:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:52.420 05:30:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:52.420 05:30:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:52.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:52.420 05:30:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:52.420 05:30:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:35:52.420 [2024-12-09 05:30:33.859341] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:35:52.420 [2024-12-09 05:30:33.860365] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
00:35:52.420 [2024-12-09 05:30:33.860409] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:52.420 [2024-12-09 05:30:33.958837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:35:52.420 [2024-12-09 05:30:34.000112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:52.420 [2024-12-09 05:30:34.000150] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:52.420 [2024-12-09 05:30:34.000159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:52.420 [2024-12-09 05:30:34.000168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:52.420 [2024-12-09 05:30:34.000175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:52.420 [2024-12-09 05:30:34.001495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:35:52.420 [2024-12-09 05:30:34.001495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:35:52.420 [2024-12-09 05:30:34.070655] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:35:52.420 [2024-12-09 05:30:34.071230] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:35:52.420 [2024-12-09 05:30:34.071461] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0
00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio
00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s
00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000
00:35:52.420 5000+0 records in
00:35:52.420 5000+0 records out
00:35:52.420 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0256468 s, 399 MB/s
00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048
00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:35:52.420 AIO0
00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256
00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:52.420 05:30:34
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:52.420 [2024-12-09 05:30:34.802263] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:52.420 [2024-12-09 05:30:34.842581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 747142 0 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 747142 0 idle 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=747142 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 747142 -w 256 00:35:52.420 05:30:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 747142 root 20 0 128.2g 44800 33152 S 0.0 0.1 0:00.28 reactor_0' 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 747142 root 20 0 128.2g 44800 33152 S 0.0 0.1 0:00.28 reactor_0 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 747142 1 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 747142 1 idle 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=747142 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 747142 -w 256 00:35:52.680 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 747184 root 20 0 128.2g 44800 33152 S 0.0 0.1 0:00.00 reactor_1' 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 747184 root 20 0 128.2g 44800 33152 S 0.0 0.1 0:00.00 
reactor_1 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=747363 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 747142 0 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 747142 0 busy 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=747142 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 747142 -w 256 00:35:52.939 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 747142 root 20 0 128.2g 45696 33152 R 99.9 0.1 0:00.46 reactor_0' 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 747142 root 20 0 128.2g 45696 33152 R 99.9 0.1 0:00.46 reactor_0 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- 
# BUSY_THRESHOLD=30 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 747142 1 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 747142 1 busy 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=747142 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 747142 -w 256 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 747184 root 20 0 128.2g 45696 33152 R 99.9 0.1 0:00.27 reactor_1' 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 747184 root 20 0 128.2g 45696 33152 R 99.9 0.1 0:00.27 reactor_1 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:35:53.198 05:30:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 747363
00:36:03.183 Initializing NVMe Controllers
00:36:03.183 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:03.183 Controller IO queue size 256, less than required.
00:36:03.183 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:03.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:36:03.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:36:03.183 Initialization complete. Launching workers.
00:36:03.183 ========================================================
00:36:03.183 Latency(us)
00:36:03.183 Device Information : IOPS MiB/s Average min max
00:36:03.183 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 17021.80 66.49 15047.89 2864.55 29883.44
00:36:03.183 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16856.90 65.85 15191.27 7257.26 28997.51
00:36:03.183 ========================================================
00:36:03.183 Total : 33878.69 132.34 15119.23 2864.55 29883.44
00:36:03.183
00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 747142 0
00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 747142 0 idle
00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=747142
00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 747142 -w 256
00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep
reactor_0 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 747142 root 20 0 128.2g 45696 33152 S 6.2 0.1 0:20.28 reactor_0' 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 747142 root 20 0 128.2g 45696 33152 S 6.2 0.1 0:20.28 reactor_0 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 747142 1 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 747142 1 idle 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=747142 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:03.183 05:30:45 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 747142 -w 256 00:36:03.183 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:03.442 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 747184 root 20 0 128.2g 45696 33152 S 0.0 0.1 0:10.00 reactor_1' 00:36:03.442 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 747184 root 20 0 128.2g 45696 33152 S 0.0 0.1 0:10.00 reactor_1 00:36:03.442 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:03.442 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:03.442 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:03.442 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:03.442 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:03.442 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:03.442 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:03.442 05:30:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:03.442 05:30:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:04.011 05:30:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:36:04.011 05:30:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:36:04.011 05:30:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:04.011 05:30:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:36:04.011 05:30:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:36:05.917 05:30:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:05.917 05:30:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:05.917 05:30:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:05.917 05:30:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:36:05.917 05:30:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:05.917 05:30:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:36:05.917 05:30:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:36:05.917 05:30:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 747142 0 00:36:05.917 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 747142 0 idle 00:36:05.917 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=747142 00:36:05.917 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:05.917 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:05.917 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:05.917 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:05.917 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:05.917 05:30:48 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:05.917 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:05.917 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:05.917 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:05.917 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 747142 -w 256 00:36:05.917 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 747142 root 20 0 128.2g 76160 33152 S 6.7 0.1 0:20.60 reactor_0' 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 747142 root 20 0 128.2g 76160 33152 S 6.7 0.1 0:20.60 reactor_0 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 747142 1 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 747142 1 idle 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=747142 00:36:06.178 
05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 747142 -w 256 00:36:06.178 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:06.438 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 747184 root 20 0 128.2g 76160 33152 S 0.0 0.1 0:10.12 reactor_1' 00:36:06.438 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 747184 root 20 0 128.2g 76160 33152 S 0.0 0.1 0:10.12 reactor_1 00:36:06.438 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:06.438 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:06.438 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:06.438 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:06.438 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:06.438 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:06.438 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold ))
00:36:06.438 05:30:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:36:06.438 05:30:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:36:06.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:36:06.438 05:30:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:36:06.438 05:30:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0
00:36:06.438 05:30:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:36:06.438 05:30:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:36:06.698 05:30:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:36:06.698 05:30:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:36:06.698 05:30:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0
00:36:06.698 05:30:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:36:06.698 05:30:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini
00:36:06.698 05:30:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:06.698 05:30:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync
00:36:06.698 05:30:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:06.698 05:30:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e
00:36:06.698 05:30:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:06.698 05:30:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:06.698 rmmod nvme_tcp
00:36:06.698 rmmod nvme_fabrics
00:36:06.698 rmmod nvme_keyring
00:36:06.698 05:30:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:06.698 05:30:48
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:36:06.698 05:30:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:36:06.698 05:30:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 747142 ']' 00:36:06.698 05:30:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 747142 00:36:06.698 05:30:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 747142 ']' 00:36:06.698 05:30:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 747142 00:36:06.698 05:30:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:36:06.698 05:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:06.698 05:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 747142 00:36:06.698 05:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:06.698 05:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:06.698 05:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 747142' 00:36:06.698 killing process with pid 747142 00:36:06.698 05:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 747142 00:36:06.698 05:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 747142 00:36:06.958 05:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:06.958 05:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:06.958 05:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:06.958 05:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:36:06.958 05:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:36:06.958 05:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:06.958 05:30:49 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:36:06.958 05:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:06.958 05:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:06.958 05:30:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:06.958 05:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:06.958 05:30:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:09.530 05:30:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:09.530 00:36:09.530 real 0m25.104s 00:36:09.530 user 0m39.467s 00:36:09.530 sys 0m10.583s 00:36:09.530 05:30:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:09.530 05:30:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:09.530 ************************************ 00:36:09.530 END TEST nvmf_interrupt 00:36:09.530 ************************************ 00:36:09.530 00:36:09.530 real 30m14.750s 00:36:09.530 user 59m33.961s 00:36:09.530 sys 11m35.202s 00:36:09.530 05:30:51 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:09.530 05:30:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:09.530 ************************************ 00:36:09.530 END TEST nvmf_tcp 00:36:09.530 ************************************ 00:36:09.530 05:30:51 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:36:09.530 05:30:51 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:09.530 05:30:51 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:09.530 05:30:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:09.530 05:30:51 -- common/autotest_common.sh@10 -- # set +x 00:36:09.530 ************************************ 
00:36:09.530 START TEST spdkcli_nvmf_tcp 00:36:09.530 ************************************ 00:36:09.530 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:09.530 * Looking for test storage... 00:36:09.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:09.530 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:09.530 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:36:09.530 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:09.530 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:09.530 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:09.530 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:09.530 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:09.530 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:36:09.530 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:36:09.530 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:36:09.530 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:36:09.530 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:36:09.530 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:36:09.530 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:36:09.530 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:09.530 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:36:09.530 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:36:09.530 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:09.530 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:09.530 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:09.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:09.531 --rc genhtml_branch_coverage=1 00:36:09.531 --rc genhtml_function_coverage=1 00:36:09.531 --rc genhtml_legend=1 00:36:09.531 --rc geninfo_all_blocks=1 00:36:09.531 --rc geninfo_unexecuted_blocks=1 00:36:09.531 00:36:09.531 ' 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:09.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:09.531 --rc genhtml_branch_coverage=1 00:36:09.531 --rc genhtml_function_coverage=1 00:36:09.531 --rc genhtml_legend=1 00:36:09.531 --rc geninfo_all_blocks=1 
00:36:09.531 --rc geninfo_unexecuted_blocks=1 00:36:09.531 00:36:09.531 ' 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:09.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:09.531 --rc genhtml_branch_coverage=1 00:36:09.531 --rc genhtml_function_coverage=1 00:36:09.531 --rc genhtml_legend=1 00:36:09.531 --rc geninfo_all_blocks=1 00:36:09.531 --rc geninfo_unexecuted_blocks=1 00:36:09.531 00:36:09.531 ' 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:09.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:09.531 --rc genhtml_branch_coverage=1 00:36:09.531 --rc genhtml_function_coverage=1 00:36:09.531 --rc genhtml_legend=1 00:36:09.531 --rc geninfo_all_blocks=1 00:36:09.531 --rc geninfo_unexecuted_blocks=1 00:36:09.531 00:36:09.531 ' 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:09.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=750211 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 750211 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 750211 ']' 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:09.531 05:30:51 
spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:09.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:09.531 05:30:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:09.531 [2024-12-09 05:30:51.837771] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:36:09.531 [2024-12-09 05:30:51.837823] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid750211 ] 00:36:09.531 [2024-12-09 05:30:51.928220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:09.531 [2024-12-09 05:30:51.968637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:09.531 [2024-12-09 05:30:51.968639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:10.469 05:30:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:10.469 05:30:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:36:10.469 05:30:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:10.469 05:30:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:10.469 05:30:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:10.469 05:30:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:10.470 05:30:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:10.470 05:30:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:10.470 
05:30:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:10.470 05:30:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:10.470 05:30:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:10.470 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:10.470 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:10.470 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:10.470 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:10.470 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:10.470 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:10.470 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:10.470 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:10.470 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:10.470 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:10.470 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:10.470 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:10.470 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:10.470 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:10.470 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:10.470 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:10.470 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:10.470 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:10.470 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:10.470 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:10.470 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:10.470 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:10.470 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:10.470 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:10.470 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:10.470 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:10.470 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:10.470 ' 00:36:13.059 [2024-12-09 05:30:55.451045] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:14.435 [2024-12-09 05:30:56.787437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:16.974 [2024-12-09 05:30:59.275019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:36:19.512 [2024-12-09 05:31:01.461781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:20.890 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:20.890 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:20.890 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:20.890 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:20.890 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:20.890 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:20.890 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:20.890 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:20.890 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:20.890 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
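The busy/idle probe traced at the start of this section (interrupt/common.sh@26-@32) samples one `top -bHn 1` thread line, strips leading whitespace, reads the %CPU column (`awk '{print $9}'`), and truncates the decimal before comparing against `idle_threshold=30`. A hedged sketch of that parsing step — helper names here are illustrative, not the exact interrupt/common.sh functions:

```shell
# Extract the %CPU field from one "top -bHn 1" thread line, as the
# traced pipeline does (sed strips leading blanks, awk takes field 9).
parse_cpu_rate() {
  echo "$1" | sed -e 's/^\s*//g' | awk '{print $9}'
}

# Compare the integer part of %CPU against the idle threshold used in
# the trace (idle_threshold=30).
thread_is_idle() {
  local idle_threshold=30
  local cpu_rate
  cpu_rate=$(parse_cpu_rate "$1")
  cpu_rate=${cpu_rate%%.*}               # keep the integer part only
  (( ${cpu_rate:-0} <= idle_threshold ))
}
```

Fed the reactor_1 line captured in the log, this yields `cpu_rate=0.0`, which the script truncates to `0` and classifies as idle.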
00:36:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:20.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:20.890 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:20.890 05:31:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:20.890 05:31:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:20.890 
05:31:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:20.890 05:31:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:20.890 05:31:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:20.890 05:31:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:20.890 05:31:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:20.890 05:31:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:21.458 05:31:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:21.458 05:31:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:21.458 05:31:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:21.458 05:31:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:21.458 05:31:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:21.458 05:31:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:21.458 05:31:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:21.458 05:31:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:21.458 05:31:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:21.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:21.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:21.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:21.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:21.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:21.458 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:21.458 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:21.458 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:21.458 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:21.458 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:21.458 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:21.458 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:21.458 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:21.458 ' 00:36:26.952 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:26.952 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:26.952 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:26.952 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:26.952 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:26.952 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:26.952 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:26.952 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:26.952 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:26.952 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:26.952 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:26.952 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:26.952 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:26.952 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:26.952 05:31:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:26.952 05:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:26.952 05:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:26.952 05:31:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 750211 00:36:26.952 05:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 750211 ']' 00:36:26.952 05:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 750211 00:36:26.952 05:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:36:27.211 05:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:27.211 05:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 750211 00:36:27.211 05:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:27.212 05:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:27.212 05:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 750211' 00:36:27.212 killing process with pid 750211 00:36:27.212 05:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 750211 00:36:27.212 05:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 750211 00:36:27.212 05:31:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # 
cleanup 00:36:27.212 05:31:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:27.212 05:31:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 750211 ']' 00:36:27.212 05:31:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 750211 00:36:27.212 05:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 750211 ']' 00:36:27.212 05:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 750211 00:36:27.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (750211) - No such process 00:36:27.212 05:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 750211 is not found' 00:36:27.212 Process with pid 750211 is not found 00:36:27.212 05:31:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:27.212 05:31:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:27.212 05:31:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:27.471 00:36:27.471 real 0m18.138s 00:36:27.471 user 0m39.708s 00:36:27.471 sys 0m1.085s 00:36:27.471 05:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:27.471 05:31:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:27.471 ************************************ 00:36:27.471 END TEST spdkcli_nvmf_tcp 00:36:27.471 ************************************ 00:36:27.471 05:31:09 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:27.471 05:31:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:27.471 05:31:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:27.471 05:31:09 -- common/autotest_common.sh@10 
-- # set +x
00:36:27.471 ************************************
00:36:27.471 START TEST nvmf_identify_passthru
00:36:27.471 ************************************
00:36:27.471 05:31:09 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:36:27.471 * Looking for test storage...
00:36:27.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:36:27.471 05:31:09 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:36:27.471 05:31:09 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version
00:36:27.471 05:31:09 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:36:27.732 05:31:09 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-:
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-:
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<'
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in
00:36:27.732 05:31:09
nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 ))
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:36:27.732 05:31:09 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0
00:36:27.732 05:31:09 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:36:27.732 05:31:09 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:36:27.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:27.732 --rc genhtml_branch_coverage=1
00:36:27.732 --rc genhtml_function_coverage=1
00:36:27.732 --rc genhtml_legend=1
00:36:27.732 --rc geninfo_all_blocks=1
00:36:27.732 --rc geninfo_unexecuted_blocks=1
00:36:27.732
00:36:27.732 '
00:36:27.732
05:31:09 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:36:27.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:27.732 --rc genhtml_branch_coverage=1
00:36:27.732 --rc genhtml_function_coverage=1
00:36:27.732 --rc genhtml_legend=1
00:36:27.732 --rc geninfo_all_blocks=1
00:36:27.732 --rc geninfo_unexecuted_blocks=1
00:36:27.732
00:36:27.732 '
00:36:27.732 05:31:09 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:36:27.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:27.732 --rc genhtml_branch_coverage=1
00:36:27.732 --rc genhtml_function_coverage=1
00:36:27.732 --rc genhtml_legend=1
00:36:27.732 --rc geninfo_all_blocks=1
00:36:27.732 --rc geninfo_unexecuted_blocks=1
00:36:27.732
00:36:27.732 '
00:36:27.732 05:31:09 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:36:27.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:27.732 --rc genhtml_branch_coverage=1
00:36:27.732 --rc genhtml_function_coverage=1
00:36:27.732 --rc genhtml_legend=1
00:36:27.732 --rc geninfo_all_blocks=1
00:36:27.732 --rc geninfo_unexecuted_blocks=1
00:36:27.732
00:36:27.732 '
00:36:27.732 05:31:09 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:36:27.732 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s
00:36:27.732 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:36:27.732 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:36:27.732 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:36:27.732 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:36:27.732 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:36:27.732 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:36:27.732 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:36:27.732 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:36:27.732 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:36:27.732 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:36:27.732 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562
00:36:27.732 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562
00:36:27.732 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:36:27.733 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:36:27.733 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:36:27.733 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:36:27.733 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:27.733 05:31:09 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob
00:36:27.733 05:31:09 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:27.733 05:31:09 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:27.733 05:31:09 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:27.733 05:31:09 nvmf_identify_passthru -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:27.733 05:31:09 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:27.733 05:31:09 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:27.733 05:31:09 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH
00:36:27.733 05:31:09 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:27.733 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0
00:36:27.733 05:31:09
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:36:27.733 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:36:27.733 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:36:27.733 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:36:27.733 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:36:27.733 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:36:27.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:36:27.733 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:36:27.733 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:36:27.733 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0
00:36:27.733 05:31:09 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:27.733 05:31:09 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob
00:36:27.733 05:31:09 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:27.733 05:31:09 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:27.733 05:31:09 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:27.733 05:31:09 nvmf_identify_passthru -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:27.733 05:31:09 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:27.733 05:31:09 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:27.733 05:31:09 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH
00:36:27.733 05:31:09 nvmf_identify_passthru -- paths/export.sh@6 -- # echo
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:27.733 05:31:09 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit
00:36:27.733 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:36:27.733 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:36:27.733 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs
00:36:27.733 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no
00:36:27.733 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns
00:36:27.733 05:31:09 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:27.733 05:31:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:36:27.733 05:31:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:27.733 05:31:10 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:36:27.733 05:31:10 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:36:27.733 05:31:10 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable
00:36:27.733 05:31:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=()
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@315
-- # local -a pci_devs
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=()
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=()
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=()
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=()
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=()
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=()
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:36:35.857
05:31:16 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:36:35.857 Found 0000:af:00.0 (0x8086 - 0x159b)
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:36:35.857 Found 0000:af:00.1
(0x8086 - 0x159b)
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]]
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:36:35.857 Found net devices under 0000:af:00.0: cvl_0_0
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:35.857 05:31:16
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]]
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:36:35.857 Found net devices under 0000:af:00.1: cvl_0_1
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:36:35.857
05:31:16 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:36:35.857 05:31:16 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:36:35.857 05:31:17 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:36:35.857 05:31:17 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:36:35.857 05:31:17 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:36:35.857 05:31:17 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:36:35.857 05:31:17 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:36:35.857 05:31:17 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:36:35.857 05:31:17 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:36:35.857 05:31:17 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:36:35.857 05:31:17 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:36:35.857 05:31:17 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:36:35.857 05:31:17 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:36:35.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:35.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms
00:36:35.857
00:36:35.857 --- 10.0.0.2 ping statistics ---
00:36:35.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:35.857 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms
00:36:35.857 05:31:17 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:35.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:35.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms
00:36:35.857
00:36:35.857 --- 10.0.0.1 ping statistics ---
00:36:35.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:35.857 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms
00:36:35.857 05:31:17 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:35.857 05:31:17 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0
00:36:35.857 05:31:17 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:36:35.857 05:31:17 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:35.857 05:31:17 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:36:35.857 05:31:17 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:36:35.857 05:31:17 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:35.857 05:31:17 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:36:35.857 05:31:17 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:36:35.857 05:31:17 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:36:35.857 05:31:17 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:35.857 05:31:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:36:35.858 05:31:17 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:36:35.858
05:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=()
00:36:35.858 05:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs
00:36:35.858 05:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:36:35.858 05:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:36:35.858 05:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=()
00:36:35.858 05:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs
00:36:35.858 05:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:36:35.858 05:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:36:35.858 05:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:36:35.858 05:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:36:35.858 05:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0
00:36:35.858 05:31:17 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:d8:00.0
00:36:35.858 05:31:17 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:d8:00.0
00:36:35.858 05:31:17 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:d8:00.0 ']'
00:36:35.858 05:31:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0
00:36:35.858 05:31:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:'
00:36:35.858 05:31:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}'
00:36:40.068 05:31:22 nvmf_identify_passthru --
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN036005WL1P6AGN
00:36:40.068 05:31:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0
00:36:40.068 05:31:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:'
00:36:40.068 05:31:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}'
00:36:45.341 05:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL
00:36:45.341 05:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify
00:36:45.341 05:31:27 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:45.341 05:31:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:36:45.341 05:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt
00:36:45.341 05:31:27 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:45.341 05:31:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:36:45.341 05:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=758111
00:36:45.341 05:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:36:45.341 05:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:36:45.341 05:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 758111
00:36:45.341 05:31:27 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 758111 ']'
00:36:45.341 05:31:27 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:45.341 05:31:27 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:45.341 05:31:27 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:45.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:45.341 05:31:27 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:45.341 05:31:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:36:45.341 [2024-12-09 05:31:27.109762] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization...
00:36:45.341 [2024-12-09 05:31:27.109818] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:45.341 [2024-12-09 05:31:27.207296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:45.341 [2024-12-09 05:31:27.250164] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:45.341 [2024-12-09 05:31:27.250203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:45.341 [2024-12-09 05:31:27.250218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:45.341 [2024-12-09 05:31:27.250227] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:45.341 [2024-12-09 05:31:27.250234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:45.341 [2024-12-09 05:31:27.252015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:45.341 [2024-12-09 05:31:27.252106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:45.341 [2024-12-09 05:31:27.252230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:45.341 [2024-12-09 05:31:27.252229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:45.617 05:31:27 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:45.617 05:31:27 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:36:45.617 05:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:45.617 05:31:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.617 05:31:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:45.617 INFO: Log level set to 20 00:36:45.617 INFO: Requests: 00:36:45.617 { 00:36:45.617 "jsonrpc": "2.0", 00:36:45.617 "method": "nvmf_set_config", 00:36:45.617 "id": 1, 00:36:45.617 "params": { 00:36:45.617 "admin_cmd_passthru": { 00:36:45.617 "identify_ctrlr": true 00:36:45.617 } 00:36:45.617 } 00:36:45.617 } 00:36:45.617 00:36:45.617 INFO: response: 00:36:45.617 { 00:36:45.617 "jsonrpc": "2.0", 00:36:45.617 "id": 1, 00:36:45.617 "result": true 00:36:45.617 } 00:36:45.617 00:36:45.617 05:31:27 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.617 05:31:27 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:45.617 05:31:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.617 05:31:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:45.617 INFO: Setting log level to 20 00:36:45.617 INFO: Setting log level to 20 00:36:45.617 INFO: Log level set to 20 00:36:45.617 INFO: Log level set to 20 00:36:45.617 
INFO: Requests: 00:36:45.617 { 00:36:45.617 "jsonrpc": "2.0", 00:36:45.617 "method": "framework_start_init", 00:36:45.617 "id": 1 00:36:45.617 } 00:36:45.617 00:36:45.617 INFO: Requests: 00:36:45.617 { 00:36:45.617 "jsonrpc": "2.0", 00:36:45.617 "method": "framework_start_init", 00:36:45.617 "id": 1 00:36:45.617 } 00:36:45.617 00:36:45.617 [2024-12-09 05:31:28.026614] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:45.617 INFO: response: 00:36:45.617 { 00:36:45.617 "jsonrpc": "2.0", 00:36:45.617 "id": 1, 00:36:45.617 "result": true 00:36:45.617 } 00:36:45.617 00:36:45.617 INFO: response: 00:36:45.617 { 00:36:45.617 "jsonrpc": "2.0", 00:36:45.617 "id": 1, 00:36:45.617 "result": true 00:36:45.617 } 00:36:45.617 00:36:45.617 05:31:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.617 05:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:45.618 05:31:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.618 05:31:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:45.618 INFO: Setting log level to 40 00:36:45.618 INFO: Setting log level to 40 00:36:45.618 INFO: Setting log level to 40 00:36:45.618 [2024-12-09 05:31:28.039960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:45.618 05:31:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.618 05:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:45.618 05:31:28 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:45.618 05:31:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:45.876 05:31:28 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:36:45.876 05:31:28 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.876 05:31:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:49.178 Nvme0n1 00:36:49.178 05:31:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.178 05:31:30 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:49.178 05:31:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.178 05:31:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:49.178 05:31:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.178 05:31:30 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:49.178 05:31:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.178 05:31:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:49.178 05:31:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.178 05:31:30 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:49.178 05:31:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.178 05:31:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:49.178 [2024-12-09 05:31:30.980071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:49.178 05:31:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.178 05:31:30 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:49.178 05:31:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.178 05:31:30 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:49.178 [ 00:36:49.178 { 00:36:49.178 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:49.178 "subtype": "Discovery", 00:36:49.178 "listen_addresses": [], 00:36:49.178 "allow_any_host": true, 00:36:49.178 "hosts": [] 00:36:49.178 }, 00:36:49.178 { 00:36:49.178 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:49.178 "subtype": "NVMe", 00:36:49.178 "listen_addresses": [ 00:36:49.178 { 00:36:49.178 "trtype": "TCP", 00:36:49.178 "adrfam": "IPv4", 00:36:49.178 "traddr": "10.0.0.2", 00:36:49.178 "trsvcid": "4420" 00:36:49.178 } 00:36:49.178 ], 00:36:49.178 "allow_any_host": true, 00:36:49.178 "hosts": [], 00:36:49.178 "serial_number": "SPDK00000000000001", 00:36:49.178 "model_number": "SPDK bdev Controller", 00:36:49.178 "max_namespaces": 1, 00:36:49.178 "min_cntlid": 1, 00:36:49.178 "max_cntlid": 65519, 00:36:49.178 "namespaces": [ 00:36:49.178 { 00:36:49.178 "nsid": 1, 00:36:49.178 "bdev_name": "Nvme0n1", 00:36:49.178 "name": "Nvme0n1", 00:36:49.178 "nguid": "F87D1AD1E3494F42A18C01FE41DE1F53", 00:36:49.178 "uuid": "f87d1ad1-e349-4f42-a18c-01fe41de1f53" 00:36:49.178 } 00:36:49.178 ] 00:36:49.178 } 00:36:49.178 ] 00:36:49.178 05:31:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.178 05:31:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:49.178 05:31:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:49.178 05:31:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:49.178 05:31:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN036005WL1P6AGN 00:36:49.178 05:31:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:49.178 05:31:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:49.178 05:31:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:49.178 05:31:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:36:49.178 05:31:31 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN036005WL1P6AGN '!=' PHLN036005WL1P6AGN ']' 00:36:49.178 05:31:31 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:36:49.178 05:31:31 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:49.178 05:31:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.178 05:31:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:49.178 05:31:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.178 05:31:31 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:49.178 05:31:31 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:49.178 05:31:31 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:49.178 05:31:31 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:49.178 05:31:31 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:49.178 05:31:31 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:49.178 05:31:31 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:49.178 05:31:31 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:49.178 rmmod nvme_tcp 00:36:49.178 rmmod nvme_fabrics 00:36:49.178 rmmod nvme_keyring 00:36:49.178 05:31:31 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:49.178 05:31:31 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:49.178 05:31:31 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:49.178 05:31:31 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 758111 ']' 00:36:49.178 05:31:31 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 758111 00:36:49.178 05:31:31 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 758111 ']' 00:36:49.178 05:31:31 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 758111 00:36:49.178 05:31:31 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:36:49.178 05:31:31 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:49.178 05:31:31 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 758111 00:36:49.438 05:31:31 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:49.438 05:31:31 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:49.438 05:31:31 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 758111' 00:36:49.438 killing process with pid 758111 00:36:49.438 05:31:31 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 758111 00:36:49.438 05:31:31 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 758111 00:36:51.345 05:31:33 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:51.345 05:31:33 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:51.345 05:31:33 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:51.345 05:31:33 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:51.345 05:31:33 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:36:51.345 05:31:33 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:36:51.345 05:31:33 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:36:51.345 05:31:33 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:51.345 05:31:33 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:51.345 05:31:33 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:51.345 05:31:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:51.345 05:31:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:53.882 05:31:35 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:53.882 00:36:53.882 real 0m26.080s 00:36:53.882 user 0m33.937s 00:36:53.882 sys 0m7.815s 00:36:53.882 05:31:35 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:53.882 05:31:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:53.882 ************************************ 00:36:53.882 END TEST nvmf_identify_passthru 00:36:53.882 ************************************ 00:36:53.882 05:31:35 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:53.882 05:31:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:53.882 05:31:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:53.882 05:31:35 -- common/autotest_common.sh@10 -- # set +x 00:36:53.882 ************************************ 00:36:53.882 START TEST nvmf_dif 00:36:53.882 ************************************ 00:36:53.882 05:31:35 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:53.882 * Looking for test storage... 
00:36:53.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:53.882 05:31:36 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:53.882 05:31:36 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:36:53.882 05:31:36 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:53.882 05:31:36 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:53.882 05:31:36 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:53.882 05:31:36 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:53.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.882 --rc genhtml_branch_coverage=1 00:36:53.882 --rc genhtml_function_coverage=1 00:36:53.882 --rc genhtml_legend=1 00:36:53.882 --rc geninfo_all_blocks=1 00:36:53.882 --rc geninfo_unexecuted_blocks=1 00:36:53.882 00:36:53.882 ' 00:36:53.882 05:31:36 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:53.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.882 --rc genhtml_branch_coverage=1 00:36:53.882 --rc genhtml_function_coverage=1 00:36:53.882 --rc genhtml_legend=1 00:36:53.882 --rc geninfo_all_blocks=1 00:36:53.882 --rc geninfo_unexecuted_blocks=1 00:36:53.882 00:36:53.882 ' 00:36:53.882 05:31:36 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:36:53.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.882 --rc genhtml_branch_coverage=1 00:36:53.882 --rc genhtml_function_coverage=1 00:36:53.882 --rc genhtml_legend=1 00:36:53.882 --rc geninfo_all_blocks=1 00:36:53.882 --rc geninfo_unexecuted_blocks=1 00:36:53.882 00:36:53.882 ' 00:36:53.882 05:31:36 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:53.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.882 --rc genhtml_branch_coverage=1 00:36:53.882 --rc genhtml_function_coverage=1 00:36:53.882 --rc genhtml_legend=1 00:36:53.882 --rc geninfo_all_blocks=1 00:36:53.882 --rc geninfo_unexecuted_blocks=1 00:36:53.882 00:36:53.882 ' 00:36:53.882 05:31:36 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:53.882 05:31:36 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:53.882 05:31:36 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:53.882 05:31:36 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:53.882 05:31:36 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:53.882 05:31:36 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:53.882 05:31:36 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:53.882 05:31:36 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:53.882 05:31:36 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:53.882 05:31:36 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:53.882 05:31:36 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:53.882 05:31:36 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:53.882 05:31:36 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:36:53.882 05:31:36 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:36:53.882 05:31:36 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:53.882 05:31:36 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:53.882 05:31:36 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:53.882 05:31:36 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:53.882 05:31:36 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:53.882 05:31:36 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:53.882 05:31:36 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.883 05:31:36 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.883 05:31:36 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.883 05:31:36 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:53.883 05:31:36 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.883 05:31:36 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:53.883 05:31:36 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:53.883 05:31:36 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:53.883 05:31:36 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:53.883 05:31:36 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:53.883 05:31:36 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:53.883 05:31:36 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:53.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:53.883 05:31:36 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:53.883 05:31:36 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:53.883 05:31:36 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:53.883 05:31:36 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:53.883 05:31:36 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:36:53.883 05:31:36 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:53.883 05:31:36 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:53.883 05:31:36 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:53.883 05:31:36 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:53.883 05:31:36 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:53.883 05:31:36 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:53.883 05:31:36 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:53.883 05:31:36 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:53.883 05:31:36 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:53.883 05:31:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:53.883 05:31:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:53.883 05:31:36 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:53.883 05:31:36 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:53.883 05:31:36 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:36:53.883 05:31:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:37:02.006 05:31:43 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:02.006 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:02.006 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:02.006 05:31:43 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:02.006 Found net devices under 0000:af:00.0: cvl_0_0 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:02.006 Found net devices under 0000:af:00.1: cvl_0_1 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:02.006 
05:31:43 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:02.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:02.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms
00:37:02.006
00:37:02.006 --- 10.0.0.2 ping statistics ---
00:37:02.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:02.006 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms
00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:37:02.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:37:02.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms
00:37:02.006
00:37:02.006 --- 10.0.0.1 ping statistics ---
00:37:02.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:02.006 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms
00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@450 -- # return 0
00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']'
00:37:02.006 05:31:43 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:37:04.542 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:37:04.542 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:37:04.542 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:37:04.542 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:37:04.542 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:37:04.542 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:37:04.542 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:37:04.542 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:37:04.542 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:37:04.542 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:37:04.542 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:37:04.542 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:37:04.542 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:37:04.542 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:37:04.542 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:37:04.542 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:37:04.542 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:37:04.542 05:31:46 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:37:04.542 05:31:46 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:37:04.542 05:31:46 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:37:04.542 05:31:46 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:37:04.543 05:31:46 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:37:04.543 05:31:46 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:37:04.543 05:31:46 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip'
00:37:04.543 05:31:46 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart
00:37:04.543 05:31:46 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:37:04.543 05:31:46 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:04.543 05:31:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:37:04.543 05:31:46 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=764179
00:37:04.543 05:31:46 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:37:04.543 05:31:46 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 764179
00:37:04.543 05:31:46 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 764179 ']'
00:37:04.543 05:31:46 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:04.543 05:31:46 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:04.543 05:31:46 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:04.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:04.543 05:31:46 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:04.543 05:31:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:04.543 [2024-12-09 05:31:46.940777] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:37:04.543 [2024-12-09 05:31:46.940827] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:04.802 [2024-12-09 05:31:47.038816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:04.802 [2024-12-09 05:31:47.076137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:04.802 [2024-12-09 05:31:47.076168] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:04.802 [2024-12-09 05:31:47.076177] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:04.802 [2024-12-09 05:31:47.076186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:04.802 [2024-12-09 05:31:47.076193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
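The namespace plumbing traced earlier in this run (nvmf/common.sh, nvmf_tcp_init) reduces to the following sketch. Commands are copied from the trace above; the interface names cvl_0_0/cvl_0_1 are this machine's ports, and everything here requires root and real NICs, so treat it as a procedure outline rather than a portable script.

```shell
# Target port lives in its own network namespace; initiator port stays in
# the default namespace. Addresses match the trace: target 10.0.0.2,
# initiator 10.0.0.1.
ip netns add cvl_0_0_ns_spdk                  # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator interface, then verify reachability
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
# The target application then runs inside the namespace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF
```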
00:37:04.802 [2024-12-09 05:31:47.076783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:05.370 05:31:47 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:05.370 05:31:47 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:37:05.370 05:31:47 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:05.370 05:31:47 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:05.370 05:31:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:05.370 05:31:47 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:05.370 05:31:47 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:37:05.370 05:31:47 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:37:05.370 05:31:47 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.370 05:31:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:05.370 [2024-12-09 05:31:47.806751] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:05.370 05:31:47 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.370 05:31:47 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:37:05.370 05:31:47 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:05.370 05:31:47 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:05.370 05:31:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:05.629 ************************************ 00:37:05.629 START TEST fio_dif_1_default 00:37:05.629 ************************************ 00:37:05.629 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:37:05.629 05:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:37:05.629 05:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:37:05.629 05:31:47 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:37:05.629 05:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:37:05.629 05:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:37:05.629 05:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:05.629 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.629 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:05.629 bdev_null0 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:05.630 [2024-12-09 05:31:47.887098] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:05.630 { 00:37:05.630 "params": { 00:37:05.630 "name": "Nvme$subsystem", 00:37:05.630 "trtype": "$TEST_TRANSPORT", 00:37:05.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:05.630 "adrfam": "ipv4", 00:37:05.630 "trsvcid": "$NVMF_PORT", 00:37:05.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:05.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:05.630 "hdgst": ${hdgst:-false}, 00:37:05.630 "ddgst": ${ddgst:-false} 00:37:05.630 }, 00:37:05.630 "method": "bdev_nvme_attach_controller" 00:37:05.630 } 00:37:05.630 EOF 00:37:05.630 )") 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
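The heredoc/`jq`/`IFS=,` dance in the trace above is the harness's gen_nvmf_target_json helper building fio's `--spdk_json_conf` input. A standalone sketch of that pattern (the values are the defaults substituted in this run; the real helper additionally pipes the result through `jq .`):

```shell
# Emit one bdev_nvme_attach_controller object per subsystem index and join
# them with a comma, exactly as the traced IFS=, / printf step does.
gen_nvmf_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  local IFS=,
  printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 0
```

With a single argument this prints one controller entry, matching the `printf '%s\n' '{ ... "name": "Nvme0" ... }'` output visible in the trace.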
00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:05.630 "params": { 00:37:05.630 "name": "Nvme0", 00:37:05.630 "trtype": "tcp", 00:37:05.630 "traddr": "10.0.0.2", 00:37:05.630 "adrfam": "ipv4", 00:37:05.630 "trsvcid": "4420", 00:37:05.630 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:05.630 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:05.630 "hdgst": false, 00:37:05.630 "ddgst": false 00:37:05.630 }, 00:37:05.630 "method": "bdev_nvme_attach_controller" 00:37:05.630 }' 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:05.630 05:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:05.888 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:05.888 fio-3.35 
00:37:05.888 Starting 1 thread
00:37:18.099
00:37:18.099 filename0: (groupid=0, jobs=1): err= 0: pid=764615: Mon Dec 9 05:31:58 2024
00:37:18.099 read: IOPS=188, BW=753KiB/s (771kB/s)(7552KiB/10027msec)
00:37:18.099 slat (nsec): min=5724, max=33005, avg=6056.32, stdev=735.05
00:37:18.099 clat (usec): min=426, max=42502, avg=21225.66, stdev=20549.97
00:37:18.099 lat (usec): min=432, max=42509, avg=21231.71, stdev=20549.98
00:37:18.099 clat percentiles (usec):
00:37:18.099 | 1.00th=[ 465], 5.00th=[ 482], 10.00th=[ 562], 20.00th=[ 603],
00:37:18.099 | 30.00th=[ 611], 40.00th=[ 660], 50.00th=[40633], 60.00th=[41157],
00:37:18.099 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206],
00:37:18.099 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730],
00:37:18.099 | 99.99th=[42730]
00:37:18.099 bw ( KiB/s): min= 672, max= 768, per=99.98%, avg=753.60, stdev=30.22, samples=20
00:37:18.099 iops : min= 168, max= 192, avg=188.40, stdev= 7.56, samples=20
00:37:18.099 lat (usec) : 500=8.90%, 750=40.89%
00:37:18.099 lat (msec) : 50=50.21%
00:37:18.099 cpu : usr=86.66%, sys=13.06%, ctx=9, majf=0, minf=0
00:37:18.099 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:37:18.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:18.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:18.099 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:18.099 latency : target=0, window=0, percentile=100.00%, depth=4
00:37:18.099
00:37:18.099 Run status group 0 (all jobs):
00:37:18.099 READ: bw=753KiB/s (771kB/s), 753KiB/s-753KiB/s (771kB/s-771kB/s), io=7552KiB (7733kB), run=10027-10027msec
00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0
00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub
00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:18.099
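The fio summary for fio_dif_1_default is internally consistent, which is a quick way to sanity-check a run: 7552 KiB read over the 10027 ms runtime with 4 KiB blocks should reproduce the reported bandwidth and IOPS.

```shell
# Recompute fio's headline numbers from the raw counters in the log above.
summary="$(awk 'BEGIN {
  io_kib = 7552; runtime_ms = 10027; block_kib = 4
  bw = io_kib / (runtime_ms / 1000)            # KiB per second
  printf "BW=%.0fKiB/s IOPS=%.0f", bw, bw / block_kib
}')"
echo "$summary"
```

This yields BW=753KiB/s and IOPS=188, matching the `read:` line reported by fio.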
05:31:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.099 00:37:18.099 real 0m11.327s 00:37:18.099 user 0m18.742s 00:37:18.099 sys 0m1.695s 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:18.099 ************************************ 00:37:18.099 END TEST fio_dif_1_default 00:37:18.099 ************************************ 00:37:18.099 05:31:59 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:18.099 05:31:59 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:18.099 05:31:59 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:18.099 05:31:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:18.099 ************************************ 00:37:18.099 START TEST fio_dif_1_multi_subsystems 00:37:18.099 ************************************ 00:37:18.099 05:31:59 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:18.099 bdev_null0 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.099 05:31:59 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:18.099 [2024-12-09 05:31:59.297584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:18.099 bdev_null1 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:37:18.099 { 00:37:18.099 "params": { 00:37:18.099 "name": "Nvme$subsystem", 00:37:18.099 "trtype": "$TEST_TRANSPORT", 00:37:18.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:18.099 "adrfam": "ipv4", 00:37:18.099 "trsvcid": "$NVMF_PORT", 00:37:18.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:18.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:18.099 "hdgst": ${hdgst:-false}, 00:37:18.099 "ddgst": ${ddgst:-false} 00:37:18.099 }, 00:37:18.099 "method": "bdev_nvme_attach_controller" 00:37:18.099 } 00:37:18.099 EOF 00:37:18.099 )") 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:18.099 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:18.100 
05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:18.100 { 00:37:18.100 "params": { 00:37:18.100 "name": "Nvme$subsystem", 00:37:18.100 "trtype": "$TEST_TRANSPORT", 00:37:18.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:18.100 "adrfam": "ipv4", 00:37:18.100 "trsvcid": "$NVMF_PORT", 00:37:18.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:18.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:18.100 "hdgst": ${hdgst:-false}, 00:37:18.100 "ddgst": ${ddgst:-false} 00:37:18.100 }, 00:37:18.100 "method": "bdev_nvme_attach_controller" 00:37:18.100 } 00:37:18.100 EOF 00:37:18.100 )") 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
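The create_subsystems 0 1 sequence traced above issues a fixed set of RPCs per subsystem. As a sketch, with rpc_cmd expanded to SPDK's scripts/rpc.py client (the harness wrapper; a live nvmf_tgt listening on /var/tmp/spdk.sock is assumed):

```shell
# Transport is created once (as in target/dif.sh@50 above)
scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip

for sub in 0 1; do
  # 64 MiB null bdev, 512-byte blocks + 16-byte metadata, DIF type 1
  scripts/rpc.py bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
      --serial-number "53313233-$sub" --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
  scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
      -t tcp -a 10.0.0.2 -s 4420
done
```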
00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:18.100 "params": { 00:37:18.100 "name": "Nvme0", 00:37:18.100 "trtype": "tcp", 00:37:18.100 "traddr": "10.0.0.2", 00:37:18.100 "adrfam": "ipv4", 00:37:18.100 "trsvcid": "4420", 00:37:18.100 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:18.100 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:18.100 "hdgst": false, 00:37:18.100 "ddgst": false 00:37:18.100 }, 00:37:18.100 "method": "bdev_nvme_attach_controller" 00:37:18.100 },{ 00:37:18.100 "params": { 00:37:18.100 "name": "Nvme1", 00:37:18.100 "trtype": "tcp", 00:37:18.100 "traddr": "10.0.0.2", 00:37:18.100 "adrfam": "ipv4", 00:37:18.100 "trsvcid": "4420", 00:37:18.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:18.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:18.100 "hdgst": false, 00:37:18.100 "ddgst": false 00:37:18.100 }, 00:37:18.100 "method": "bdev_nvme_attach_controller" 00:37:18.100 }' 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:37:18.100 05:31:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:37:18.100 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:37:18.100 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:37:18.100 fio-3.35
00:37:18.100 Starting 2 threads
00:37:30.363
00:37:30.363 filename0: (groupid=0, jobs=1): err= 0: pid=766630: Mon Dec 9 05:32:10 2024
00:37:30.363 read: IOPS=202, BW=810KiB/s (829kB/s)(8112KiB/10019msec)
00:37:30.363 slat (nsec): min=5789, max=29263, avg=6862.45, stdev=2140.90
00:37:30.363 clat (usec): min=386, max=42565, avg=19741.73, stdev=20458.94
00:37:30.363 lat (usec): min=392, max=42571, avg=19748.59, stdev=20458.34
00:37:30.363 clat percentiles (usec):
00:37:30.363 | 1.00th=[ 396], 5.00th=[ 408], 10.00th=[ 412], 20.00th=[ 429],
00:37:30.363 | 30.00th=[ 529], 40.00th=[ 586], 50.00th=[ 644], 60.00th=[40633],
00:37:30.363 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42730],
00:37:30.363 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730],
00:37:30.363 | 99.99th=[42730]
00:37:30.363 bw ( KiB/s): min= 768, max= 896, per=67.28%, avg=809.60, stdev=42.93, samples=20
00:37:30.363 iops : min= 192, max= 224, avg=202.40, stdev=10.73, samples=20
00:37:30.363 lat (usec) : 500=28.75%, 750=23.52%, 1000=0.59%
00:37:30.363 lat (msec) : 2=0.20%, 50=46.94%
00:37:30.363 cpu : usr=93.32%, sys=6.43%, ctx=14, majf=0, minf=73
00:37:30.363 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:37:30.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:30.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:30.363 issued rwts: total=2028,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:30.363 latency : target=0, window=0, percentile=100.00%, depth=4
00:37:30.363 filename1: (groupid=0, jobs=1): err= 0: pid=766631: Mon Dec 9 05:32:10 2024
00:37:30.363 read: IOPS=98, BW=393KiB/s (403kB/s)(3936KiB/10011msec)
00:37:30.363 slat (nsec): min=5781, max=26252, avg=7523.84, stdev=2588.96
00:37:30.363 clat (usec): min=401, max=42035, avg=40672.79, stdev=3647.15
00:37:30.363 lat (usec): min=408, max=42046, avg=40680.31, stdev=3647.16
00:37:30.363 clat percentiles (usec):
00:37:30.363 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:37:30.363 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:37:30.363 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:37:30.363 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:37:30.363 | 99.99th=[42206]
00:37:30.363 bw ( KiB/s): min= 384, max= 416, per=32.52%, avg=392.00, stdev=14.22, samples=20
00:37:30.363 iops : min= 96, max= 104, avg=98.00, stdev= 3.55, samples=20
00:37:30.363 lat (usec) : 500=0.81%
00:37:30.363 lat (msec) : 50=99.19%
00:37:30.363 cpu : usr=93.75%, sys=6.00%, ctx=14, majf=0, minf=21
00:37:30.363 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:37:30.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:30.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:30.363 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:30.363 latency : target=0, window=0, percentile=100.00%, depth=4
00:37:30.363
00:37:30.363 Run status group 0 (all jobs):
00:37:30.363 READ: bw=1203KiB/s (1231kB/s), 393KiB/s-810KiB/s (403kB/s-829kB/s), io=11.8MiB (12.3MB), run=10011-10019msec
00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- #
destroy_subsystems 0 1 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:30.363 05:32:10 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.363 00:37:30.363 real 0m11.652s 00:37:30.363 user 0m29.339s 00:37:30.363 sys 0m1.629s 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:30.363 05:32:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:30.363 ************************************ 00:37:30.363 END TEST fio_dif_1_multi_subsystems 00:37:30.363 ************************************ 00:37:30.363 05:32:10 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:37:30.363 05:32:10 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:30.363 05:32:10 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:30.363 05:32:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:30.363 ************************************ 00:37:30.363 START TEST fio_dif_rand_params 00:37:30.363 ************************************ 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:30.363 05:32:11 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:30.363 bdev_null0 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:30.363 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:30.364 [2024-12-09 05:32:11.036258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:30.364 { 00:37:30.364 "params": { 00:37:30.364 "name": "Nvme$subsystem", 00:37:30.364 "trtype": "$TEST_TRANSPORT", 00:37:30.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:30.364 "adrfam": "ipv4", 00:37:30.364 "trsvcid": 
"$NVMF_PORT", 00:37:30.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:30.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:30.364 "hdgst": ${hdgst:-false}, 00:37:30.364 "ddgst": ${ddgst:-false} 00:37:30.364 }, 00:37:30.364 "method": "bdev_nvme_attach_controller" 00:37:30.364 } 00:37:30.364 EOF 00:37:30.364 )") 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:30.364 
05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:30.364 "params": { 00:37:30.364 "name": "Nvme0", 00:37:30.364 "trtype": "tcp", 00:37:30.364 "traddr": "10.0.0.2", 00:37:30.364 "adrfam": "ipv4", 00:37:30.364 "trsvcid": "4420", 00:37:30.364 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:30.364 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:30.364 "hdgst": false, 00:37:30.364 "ddgst": false 00:37:30.364 }, 00:37:30.364 "method": "bdev_nvme_attach_controller" 00:37:30.364 }' 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:30.364 05:32:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:30.364 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:30.364 ... 00:37:30.364 fio-3.35 00:37:30.364 Starting 3 threads 00:37:34.599 00:37:34.599 filename0: (groupid=0, jobs=1): err= 0: pid=768632: Mon Dec 9 05:32:17 2024 00:37:34.599 read: IOPS=296, BW=37.1MiB/s (38.9MB/s)(186MiB/5004msec) 00:37:34.599 slat (nsec): min=6025, max=33847, avg=12814.56, stdev=4790.09 00:37:34.599 clat (usec): min=3413, max=51055, avg=10099.47, stdev=8079.49 00:37:34.599 lat (usec): min=3419, max=51066, avg=10112.29, stdev=8079.59 00:37:34.599 clat percentiles (usec): 00:37:34.599 | 1.00th=[ 3884], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7570], 00:37:34.599 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979], 00:37:34.599 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[11469], 00:37:34.599 | 99.00th=[49546], 99.50th=[50594], 99.90th=[51119], 99.95th=[51119], 00:37:34.599 | 99.99th=[51119] 00:37:34.599 bw ( KiB/s): min=18944, max=48896, per=31.95%, avg=37939.20, stdev=8315.34, samples=10 00:37:34.599 iops : min= 148, max= 382, avg=296.40, stdev=64.96, samples=10 00:37:34.599 lat (msec) : 4=1.15%, 10=85.04%, 20=9.77%, 50=3.23%, 100=0.81% 00:37:34.599 cpu : usr=94.14%, sys=5.56%, ctx=7, majf=0, minf=50 00:37:34.599 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:34.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.599 issued rwts: total=1484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.599 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:34.599 filename0: (groupid=0, jobs=1): err= 0: pid=768633: Mon Dec 9 05:32:17 2024 00:37:34.599 read: IOPS=322, BW=40.3MiB/s (42.3MB/s)(203MiB/5042msec) 00:37:34.599 slat (nsec): min=5986, max=36045, avg=12717.31, 
stdev=4962.84 00:37:34.599 clat (usec): min=3247, max=51633, avg=9257.10, stdev=5171.95 00:37:34.599 lat (usec): min=3253, max=51641, avg=9269.81, stdev=5172.20 00:37:34.599 clat percentiles (usec): 00:37:34.599 | 1.00th=[ 3523], 5.00th=[ 4424], 10.00th=[ 5866], 20.00th=[ 6587], 00:37:34.599 | 30.00th=[ 7701], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9634], 00:37:34.599 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11338], 95.00th=[11863], 00:37:34.599 | 99.00th=[46400], 99.50th=[50594], 99.90th=[51643], 99.95th=[51643], 00:37:34.599 | 99.99th=[51643] 00:37:34.599 bw ( KiB/s): min=37376, max=50944, per=35.04%, avg=41600.00, stdev=4349.91, samples=10 00:37:34.599 iops : min= 292, max= 398, avg=325.00, stdev=33.98, samples=10 00:37:34.599 lat (msec) : 4=3.56%, 10=65.40%, 20=29.63%, 50=0.86%, 100=0.55% 00:37:34.599 cpu : usr=94.51%, sys=5.18%, ctx=8, majf=0, minf=38 00:37:34.599 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:34.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.599 issued rwts: total=1627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.599 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:34.599 filename0: (groupid=0, jobs=1): err= 0: pid=768634: Mon Dec 9 05:32:17 2024 00:37:34.599 read: IOPS=313, BW=39.1MiB/s (41.0MB/s)(196MiB/5002msec) 00:37:34.599 slat (nsec): min=6282, max=89639, avg=15164.49, stdev=5662.60 00:37:34.599 clat (usec): min=3683, max=51183, avg=9563.80, stdev=6739.87 00:37:34.599 lat (usec): min=3691, max=51204, avg=9578.97, stdev=6739.82 00:37:34.599 clat percentiles (usec): 00:37:34.599 | 1.00th=[ 4178], 5.00th=[ 5669], 10.00th=[ 6128], 20.00th=[ 7046], 00:37:34.599 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 9110], 00:37:34.599 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11338], 00:37:34.599 | 99.00th=[49021], 99.50th=[49546], 
99.90th=[51119], 99.95th=[51119], 00:37:34.599 | 99.99th=[51119] 00:37:34.599 bw ( KiB/s): min=31232, max=51200, per=33.72%, avg=40038.40, stdev=6124.06, samples=10 00:37:34.599 iops : min= 244, max= 400, avg=312.80, stdev=47.84, samples=10 00:37:34.599 lat (msec) : 4=0.70%, 10=80.65%, 20=15.96%, 50=2.23%, 100=0.45% 00:37:34.599 cpu : usr=94.66%, sys=5.00%, ctx=8, majf=0, minf=82 00:37:34.599 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:34.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.599 issued rwts: total=1566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.599 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:34.599 00:37:34.599 Run status group 0 (all jobs): 00:37:34.599 READ: bw=116MiB/s (122MB/s), 37.1MiB/s-40.3MiB/s (38.9MB/s-42.3MB/s), io=585MiB (613MB), run=5002-5042msec 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:34.859 05:32:17 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.859 bdev_null0 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.859 [2024-12-09 05:32:17.279363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.859 bdev_null1 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:34.859 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:34.860 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.860 05:32:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:37:35.120 bdev_null2 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@560 -- # local subsystem config 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:35.120 { 00:37:35.120 "params": { 00:37:35.120 "name": "Nvme$subsystem", 00:37:35.120 "trtype": "$TEST_TRANSPORT", 00:37:35.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:35.120 "adrfam": "ipv4", 00:37:35.120 "trsvcid": "$NVMF_PORT", 00:37:35.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:35.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:35.120 "hdgst": ${hdgst:-false}, 00:37:35.120 "ddgst": ${ddgst:-false} 00:37:35.120 }, 00:37:35.120 "method": "bdev_nvme_attach_controller" 00:37:35.120 } 00:37:35.120 EOF 00:37:35.120 )") 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:35.120 05:32:17 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:35.120 { 00:37:35.120 "params": { 00:37:35.120 "name": "Nvme$subsystem", 00:37:35.120 "trtype": "$TEST_TRANSPORT", 00:37:35.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:35.120 "adrfam": "ipv4", 00:37:35.120 "trsvcid": "$NVMF_PORT", 00:37:35.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:35.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:35.120 "hdgst": ${hdgst:-false}, 00:37:35.120 "ddgst": ${ddgst:-false} 00:37:35.120 }, 00:37:35.120 "method": "bdev_nvme_attach_controller" 00:37:35.120 } 00:37:35.120 EOF 00:37:35.120 )") 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:35.120 05:32:17 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:35.120 05:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:35.120 { 00:37:35.120 "params": { 00:37:35.120 "name": "Nvme$subsystem", 00:37:35.120 "trtype": "$TEST_TRANSPORT", 00:37:35.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:35.120 "adrfam": "ipv4", 00:37:35.120 "trsvcid": "$NVMF_PORT", 00:37:35.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:35.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:35.121 "hdgst": ${hdgst:-false}, 00:37:35.121 "ddgst": ${ddgst:-false} 00:37:35.121 }, 00:37:35.121 "method": "bdev_nvme_attach_controller" 00:37:35.121 } 00:37:35.121 EOF 00:37:35.121 )") 00:37:35.121 05:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:35.121 05:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:37:35.121 05:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:35.121 05:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:35.121 "params": { 00:37:35.121 "name": "Nvme0", 00:37:35.121 "trtype": "tcp", 00:37:35.121 "traddr": "10.0.0.2", 00:37:35.121 "adrfam": "ipv4", 00:37:35.121 "trsvcid": "4420", 00:37:35.121 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:35.121 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:35.121 "hdgst": false, 00:37:35.121 "ddgst": false 00:37:35.121 }, 00:37:35.121 "method": "bdev_nvme_attach_controller" 00:37:35.121 },{ 00:37:35.121 "params": { 00:37:35.121 "name": "Nvme1", 00:37:35.121 "trtype": "tcp", 00:37:35.121 "traddr": "10.0.0.2", 00:37:35.121 "adrfam": "ipv4", 00:37:35.121 "trsvcid": "4420", 00:37:35.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:35.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:35.121 "hdgst": false, 00:37:35.121 "ddgst": false 00:37:35.121 }, 00:37:35.121 "method": "bdev_nvme_attach_controller" 00:37:35.121 },{ 00:37:35.121 "params": { 00:37:35.121 "name": "Nvme2", 00:37:35.121 "trtype": "tcp", 00:37:35.121 "traddr": "10.0.0.2", 00:37:35.121 "adrfam": "ipv4", 00:37:35.121 "trsvcid": "4420", 00:37:35.121 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:35.121 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:35.121 "hdgst": false, 00:37:35.121 "ddgst": false 00:37:35.121 }, 00:37:35.121 "method": "bdev_nvme_attach_controller" 00:37:35.121 }' 00:37:35.121 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:35.121 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:35.121 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:35.121 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:35.121 05:32:17 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:35.121 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:35.121 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:35.121 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:35.121 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:35.121 05:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:35.381 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:35.381 ... 00:37:35.381 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:35.381 ... 00:37:35.381 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:35.381 ... 
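The trace above (nvmf/common.sh@562-586) builds the fio `--spdk_json_conf` payload by appending one `bdev_nvme_attach_controller` JSON fragment per subsystem to an array via a here-doc, comma-joining the fragments with `IFS=,`, and piping the result through `jq .`. A minimal standalone sketch of that pattern follows; the variable values are stand-ins copied from the trace, and the wrapper object plus the use of `python3 -m json.tool` for validation (in place of `jq`) are assumptions made to keep the example self-contained:

```shell
# Stand-in values taken from the traced run; the real script exports these.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

# One JSON fragment per subsystem, appended via a here-doc, as in the trace.
config=()
for subsystem in 0 1 2; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Comma-join the fragments, mirroring the "IFS=," + printf steps in the log.
joined=$(IFS=,; printf '%s' "${config[*]}")

# Wrap and validate (the real script validates with "jq ."; json.tool is an
# assumed substitute so the sketch needs only a stock Python install).
printf '{ "config": [ %s ] }' "$joined" | python3 -m json.tool > /dev/null \
  && echo "config for ${#config[@]} subsystems is valid JSON"
```

With three subsystems this yields the same three-controller document that the `printf '%s\n' '{ ... }'` step above shows being handed to fio as `/dev/fd/62`.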
00:37:35.381 fio-3.35 00:37:35.381 Starting 24 threads 00:37:47.596 00:37:47.596 filename0: (groupid=0, jobs=1): err= 0: pid=769841: Mon Dec 9 05:32:28 2024 00:37:47.596 read: IOPS=647, BW=2589KiB/s (2651kB/s)(25.3MiB/10013msec) 00:37:47.596 slat (nsec): min=6096, max=97065, avg=19910.48, stdev=14228.11 00:37:47.596 clat (usec): min=1267, max=28336, avg=24572.00, stdev=2754.39 00:37:47.596 lat (usec): min=1281, max=28357, avg=24591.91, stdev=2754.49 00:37:47.596 clat percentiles (usec): 00:37:47.596 | 1.00th=[ 5997], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511], 00:37:47.596 | 30.00th=[24773], 40.00th=[24773], 50.00th=[24773], 60.00th=[24773], 00:37:47.596 | 70.00th=[25035], 80.00th=[25035], 90.00th=[26084], 95.00th=[26346], 00:37:47.596 | 99.00th=[27132], 99.50th=[27132], 99.90th=[28181], 99.95th=[28181], 00:37:47.596 | 99.99th=[28443] 00:37:47.596 bw ( KiB/s): min= 2432, max= 3328, per=4.22%, avg=2585.60, stdev=188.49, samples=20 00:37:47.596 iops : min= 608, max= 832, avg=646.40, stdev=47.12, samples=20 00:37:47.596 lat (msec) : 2=0.74%, 10=0.77%, 20=0.68%, 50=97.81% 00:37:47.596 cpu : usr=97.65%, sys=1.90%, ctx=44, majf=0, minf=9 00:37:47.596 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:47.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.596 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.596 issued rwts: total=6480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.596 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.597 filename0: (groupid=0, jobs=1): err= 0: pid=769842: Mon Dec 9 05:32:28 2024 00:37:47.597 read: IOPS=637, BW=2550KiB/s (2611kB/s)(24.9MiB/10013msec) 00:37:47.597 slat (usec): min=7, max=101, avg=45.76, stdev=16.62 00:37:47.597 clat (usec): min=17183, max=29886, avg=24717.35, stdev=782.58 00:37:47.597 lat (usec): min=17224, max=29903, avg=24763.11, stdev=782.21 00:37:47.597 clat percentiles (usec): 00:37:47.597 | 
1.00th=[23725], 5.00th=[23987], 10.00th=[23987], 20.00th=[24249], 00:37:47.597 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24511], 60.00th=[24773], 00:37:47.597 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25822], 95.00th=[26084], 00:37:47.597 | 99.00th=[26870], 99.50th=[27395], 99.90th=[29754], 99.95th=[29754], 00:37:47.597 | 99.99th=[30016] 00:37:47.597 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2546.53, stdev=58.73, samples=19 00:37:47.597 iops : min= 608, max= 672, avg=636.63, stdev=14.68, samples=19 00:37:47.597 lat (msec) : 20=0.25%, 50=99.75% 00:37:47.597 cpu : usr=97.78%, sys=1.71%, ctx=60, majf=0, minf=9 00:37:47.597 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:47.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.597 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.597 issued rwts: total=6384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.597 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.597 filename0: (groupid=0, jobs=1): err= 0: pid=769843: Mon Dec 9 05:32:28 2024 00:37:47.597 read: IOPS=637, BW=2552KiB/s (2613kB/s)(24.9MiB/10007msec) 00:37:47.597 slat (nsec): min=6425, max=64767, avg=17608.91, stdev=5716.38 00:37:47.597 clat (usec): min=6989, max=45860, avg=24931.12, stdev=1818.25 00:37:47.597 lat (usec): min=7002, max=45877, avg=24948.73, stdev=1817.93 00:37:47.597 clat percentiles (usec): 00:37:47.597 | 1.00th=[23462], 5.00th=[23987], 10.00th=[24511], 20.00th=[24511], 00:37:47.597 | 30.00th=[24773], 40.00th=[24773], 50.00th=[24773], 60.00th=[24773], 00:37:47.597 | 70.00th=[25035], 80.00th=[25297], 90.00th=[26084], 95.00th=[26346], 00:37:47.597 | 99.00th=[27395], 99.50th=[27657], 99.90th=[45876], 99.95th=[45876], 00:37:47.597 | 99.99th=[45876] 00:37:47.597 bw ( KiB/s): min= 2432, max= 2672, per=4.15%, avg=2540.00, stdev=62.24, samples=19 00:37:47.597 iops : min= 608, max= 668, avg=635.00, stdev=15.56, samples=19 
00:37:47.597 lat (msec) : 10=0.28%, 20=0.41%, 50=99.31% 00:37:47.597 cpu : usr=96.73%, sys=2.42%, ctx=149, majf=0, minf=9 00:37:47.597 IO depths : 1=4.9%, 2=11.1%, 4=24.9%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:37:47.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.597 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.597 issued rwts: total=6384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.597 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.597 filename0: (groupid=0, jobs=1): err= 0: pid=769844: Mon Dec 9 05:32:28 2024 00:37:47.597 read: IOPS=637, BW=2551KiB/s (2612kB/s)(24.9MiB/10012msec) 00:37:47.597 slat (nsec): min=6862, max=73170, avg=23899.83, stdev=13113.32 00:37:47.597 clat (usec): min=15466, max=33398, avg=24885.59, stdev=998.06 00:37:47.597 lat (usec): min=15481, max=33419, avg=24909.49, stdev=998.26 00:37:47.597 clat percentiles (usec): 00:37:47.597 | 1.00th=[23725], 5.00th=[23987], 10.00th=[24511], 20.00th=[24511], 00:37:47.597 | 30.00th=[24511], 40.00th=[24773], 50.00th=[24773], 60.00th=[24773], 00:37:47.597 | 70.00th=[24773], 80.00th=[25035], 90.00th=[26084], 95.00th=[26346], 00:37:47.597 | 99.00th=[27395], 99.50th=[28705], 99.90th=[33424], 99.95th=[33424], 00:37:47.597 | 99.99th=[33424] 00:37:47.597 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2547.20, stdev=70.91, samples=20 00:37:47.597 iops : min= 608, max= 672, avg=636.80, stdev=17.73, samples=20 00:37:47.597 lat (msec) : 20=0.53%, 50=99.47% 00:37:47.597 cpu : usr=97.81%, sys=1.80%, ctx=13, majf=0, minf=9 00:37:47.597 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:37:47.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.597 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.597 issued rwts: total=6384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.597 latency : target=0, window=0, percentile=100.00%, depth=16 
00:37:47.597 filename0: (groupid=0, jobs=1): err= 0: pid=769845: Mon Dec 9 05:32:28 2024 00:37:47.597 read: IOPS=638, BW=2555KiB/s (2616kB/s)(25.0MiB/10020msec) 00:37:47.597 slat (nsec): min=11723, max=99685, avg=44008.01, stdev=18083.46 00:37:47.597 clat (usec): min=10659, max=28263, avg=24697.05, stdev=1029.59 00:37:47.597 lat (usec): min=10675, max=28289, avg=24741.06, stdev=1030.25 00:37:47.597 clat percentiles (usec): 00:37:47.597 | 1.00th=[23725], 5.00th=[23987], 10.00th=[23987], 20.00th=[24249], 00:37:47.597 | 30.00th=[24511], 40.00th=[24511], 50.00th=[24511], 60.00th=[24773], 00:37:47.597 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25822], 95.00th=[26084], 00:37:47.597 | 99.00th=[26870], 99.50th=[27132], 99.90th=[27919], 99.95th=[28181], 00:37:47.597 | 99.99th=[28181] 00:37:47.597 bw ( KiB/s): min= 2432, max= 2688, per=4.17%, avg=2553.60, stdev=50.44, samples=20 00:37:47.597 iops : min= 608, max= 672, avg=638.40, stdev=12.61, samples=20 00:37:47.597 lat (msec) : 20=0.50%, 50=99.50% 00:37:47.597 cpu : usr=97.82%, sys=1.67%, ctx=57, majf=0, minf=9 00:37:47.597 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:47.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.597 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.597 issued rwts: total=6400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.597 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.597 filename0: (groupid=0, jobs=1): err= 0: pid=769846: Mon Dec 9 05:32:28 2024 00:37:47.597 read: IOPS=637, BW=2550KiB/s (2611kB/s)(24.9MiB/10013msec) 00:37:47.597 slat (usec): min=7, max=110, avg=50.76, stdev=17.14 00:37:47.597 clat (usec): min=17227, max=29924, avg=24636.94, stdev=796.80 00:37:47.597 lat (usec): min=17272, max=29936, avg=24687.70, stdev=797.79 00:37:47.597 clat percentiles (usec): 00:37:47.597 | 1.00th=[23462], 5.00th=[23987], 10.00th=[23987], 20.00th=[24249], 00:37:47.597 | 
30.00th=[24249], 40.00th=[24511], 50.00th=[24511], 60.00th=[24511], 00:37:47.597 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[26084], 00:37:47.597 | 99.00th=[26870], 99.50th=[27395], 99.90th=[29754], 99.95th=[30016], 00:37:47.597 | 99.99th=[30016] 00:37:47.597 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2546.53, stdev=58.73, samples=19 00:37:47.597 iops : min= 608, max= 672, avg=636.63, stdev=14.68, samples=19 00:37:47.597 lat (msec) : 20=0.25%, 50=99.75% 00:37:47.597 cpu : usr=98.12%, sys=1.47%, ctx=26, majf=0, minf=9 00:37:47.597 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:47.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.597 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.597 issued rwts: total=6384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.597 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.597 filename0: (groupid=0, jobs=1): err= 0: pid=769847: Mon Dec 9 05:32:28 2024 00:37:47.597 read: IOPS=638, BW=2554KiB/s (2616kB/s)(25.0MiB/10007msec) 00:37:47.597 slat (usec): min=4, max=104, avg=50.74, stdev=18.21 00:37:47.597 clat (usec): min=14603, max=51220, avg=24619.85, stdev=1502.14 00:37:47.597 lat (usec): min=14612, max=51233, avg=24670.58, stdev=1503.08 00:37:47.597 clat percentiles (usec): 00:37:47.597 | 1.00th=[18482], 5.00th=[23725], 10.00th=[23987], 20.00th=[24249], 00:37:47.597 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24511], 60.00th=[24773], 00:37:47.597 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25822], 95.00th=[26084], 00:37:47.597 | 99.00th=[26870], 99.50th=[31065], 99.90th=[42206], 99.95th=[42206], 00:37:47.597 | 99.99th=[51119] 00:37:47.597 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2549.32, stdev=60.09, samples=19 00:37:47.597 iops : min= 608, max= 672, avg=637.32, stdev=15.01, samples=19 00:37:47.597 lat (msec) : 20=1.31%, 50=98.65%, 100=0.03% 00:37:47.597 cpu : usr=98.05%, sys=1.53%, 
ctx=43, majf=0, minf=9 00:37:47.597 IO depths : 1=5.9%, 2=12.0%, 4=24.6%, 8=50.8%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:47.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.597 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.597 issued rwts: total=6390,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.597 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.597 filename0: (groupid=0, jobs=1): err= 0: pid=769848: Mon Dec 9 05:32:28 2024 00:37:47.597 read: IOPS=638, BW=2555KiB/s (2616kB/s)(25.0MiB/10019msec) 00:37:47.597 slat (usec): min=7, max=100, avg=37.41, stdev=19.76 00:37:47.597 clat (usec): min=10653, max=28290, avg=24757.72, stdev=1027.48 00:37:47.597 lat (usec): min=10670, max=28314, avg=24795.12, stdev=1027.86 00:37:47.597 clat percentiles (usec): 00:37:47.597 | 1.00th=[23725], 5.00th=[23987], 10.00th=[24249], 20.00th=[24249], 00:37:47.597 | 30.00th=[24511], 40.00th=[24511], 50.00th=[24773], 60.00th=[24773], 00:37:47.597 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25822], 95.00th=[26084], 00:37:47.597 | 99.00th=[26870], 99.50th=[27132], 99.90th=[28181], 99.95th=[28181], 00:37:47.597 | 99.99th=[28181] 00:37:47.597 bw ( KiB/s): min= 2432, max= 2688, per=4.17%, avg=2553.60, stdev=50.44, samples=20 00:37:47.597 iops : min= 608, max= 672, avg=638.40, stdev=12.61, samples=20 00:37:47.597 lat (msec) : 20=0.50%, 50=99.50% 00:37:47.597 cpu : usr=98.03%, sys=1.56%, ctx=64, majf=0, minf=9 00:37:47.597 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:47.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.597 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.597 issued rwts: total=6400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.597 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.597 filename1: (groupid=0, jobs=1): err= 0: pid=769849: Mon Dec 9 05:32:28 2024 00:37:47.597 read: 
IOPS=637, BW=2551KiB/s (2612kB/s)(24.9MiB/10010msec) 00:37:47.597 slat (nsec): min=4273, max=67700, avg=16337.47, stdev=5815.17 00:37:47.597 clat (usec): min=16580, max=31068, avg=24945.96, stdev=836.38 00:37:47.597 lat (usec): min=16595, max=31080, avg=24962.29, stdev=835.92 00:37:47.597 clat percentiles (usec): 00:37:47.597 | 1.00th=[23725], 5.00th=[23987], 10.00th=[24511], 20.00th=[24511], 00:37:47.597 | 30.00th=[24773], 40.00th=[24773], 50.00th=[24773], 60.00th=[24773], 00:37:47.597 | 70.00th=[25035], 80.00th=[25297], 90.00th=[26084], 95.00th=[26346], 00:37:47.597 | 99.00th=[27395], 99.50th=[27657], 99.90th=[31065], 99.95th=[31065], 00:37:47.597 | 99.99th=[31065] 00:37:47.597 bw ( KiB/s): min= 2427, max= 2688, per=4.16%, avg=2546.26, stdev=59.28, samples=19 00:37:47.598 iops : min= 606, max= 672, avg=636.53, stdev=14.90, samples=19 00:37:47.598 lat (msec) : 20=0.25%, 50=99.75% 00:37:47.598 cpu : usr=97.49%, sys=1.84%, ctx=85, majf=0, minf=9 00:37:47.598 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:47.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.598 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.598 issued rwts: total=6384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.598 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.598 filename1: (groupid=0, jobs=1): err= 0: pid=769850: Mon Dec 9 05:32:28 2024 00:37:47.598 read: IOPS=637, BW=2551KiB/s (2612kB/s)(24.9MiB/10012msec) 00:37:47.598 slat (nsec): min=6606, max=80916, avg=26195.90, stdev=16750.93 00:37:47.598 clat (usec): min=15465, max=33018, avg=24815.74, stdev=921.32 00:37:47.598 lat (usec): min=15478, max=33042, avg=24841.93, stdev=923.60 00:37:47.598 clat percentiles (usec): 00:37:47.598 | 1.00th=[23725], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511], 00:37:47.598 | 30.00th=[24511], 40.00th=[24511], 50.00th=[24773], 60.00th=[24773], 00:37:47.598 | 70.00th=[24773], 80.00th=[25035], 
90.00th=[25822], 95.00th=[26084], 00:37:47.598 | 99.00th=[27132], 99.50th=[27919], 99.90th=[32900], 99.95th=[32900], 00:37:47.598 | 99.99th=[32900] 00:37:47.598 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2547.20, stdev=70.72, samples=20 00:37:47.598 iops : min= 608, max= 672, avg=636.80, stdev=17.68, samples=20 00:37:47.598 lat (msec) : 20=0.50%, 50=99.50% 00:37:47.598 cpu : usr=97.85%, sys=1.72%, ctx=35, majf=0, minf=9 00:37:47.598 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:47.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.598 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.598 issued rwts: total=6384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.598 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.598 filename1: (groupid=0, jobs=1): err= 0: pid=769851: Mon Dec 9 05:32:28 2024 00:37:47.598 read: IOPS=637, BW=2552KiB/s (2613kB/s)(24.9MiB/10007msec) 00:37:47.598 slat (usec): min=6, max=103, avg=33.76, stdev=21.14 00:37:47.598 clat (usec): min=5906, max=45928, avg=24780.45, stdev=1862.71 00:37:47.598 lat (usec): min=5916, max=45946, avg=24814.21, stdev=1863.51 00:37:47.598 clat percentiles (usec): 00:37:47.598 | 1.00th=[21103], 5.00th=[23987], 10.00th=[23987], 20.00th=[24249], 00:37:47.598 | 30.00th=[24511], 40.00th=[24511], 50.00th=[24773], 60.00th=[24773], 00:37:47.598 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25822], 95.00th=[26346], 00:37:47.598 | 99.00th=[27657], 99.50th=[31327], 99.90th=[45876], 99.95th=[45876], 00:37:47.598 | 99.99th=[45876] 00:37:47.598 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2546.74, stdev=70.86, samples=19 00:37:47.598 iops : min= 608, max= 672, avg=636.68, stdev=17.71, samples=19 00:37:47.598 lat (msec) : 10=0.25%, 20=0.60%, 50=99.15% 00:37:47.598 cpu : usr=97.85%, sys=1.69%, ctx=115, majf=0, minf=9 00:37:47.598 IO depths : 1=5.5%, 2=11.6%, 4=24.5%, 8=51.3%, 16=7.0%, 32=0.0%, >=64=0.0% 
00:37:47.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.598 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.598 issued rwts: total=6384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.598 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.598 filename1: (groupid=0, jobs=1): err= 0: pid=769852: Mon Dec 9 05:32:28 2024 00:37:47.598 read: IOPS=638, BW=2555KiB/s (2616kB/s)(25.0MiB/10020msec) 00:37:47.598 slat (usec): min=10, max=115, avg=45.91, stdev=20.80 00:37:47.598 clat (usec): min=10669, max=28366, avg=24678.22, stdev=1028.22 00:37:47.598 lat (usec): min=10691, max=28402, avg=24724.13, stdev=1029.30 00:37:47.598 clat percentiles (usec): 00:37:47.598 | 1.00th=[23725], 5.00th=[23987], 10.00th=[23987], 20.00th=[24249], 00:37:47.598 | 30.00th=[24511], 40.00th=[24511], 50.00th=[24511], 60.00th=[24773], 00:37:47.598 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25822], 95.00th=[26084], 00:37:47.598 | 99.00th=[26870], 99.50th=[27132], 99.90th=[27919], 99.95th=[28181], 00:37:47.598 | 99.99th=[28443] 00:37:47.598 bw ( KiB/s): min= 2432, max= 2688, per=4.17%, avg=2553.60, stdev=50.44, samples=20 00:37:47.598 iops : min= 608, max= 672, avg=638.40, stdev=12.61, samples=20 00:37:47.598 lat (msec) : 20=0.50%, 50=99.50% 00:37:47.598 cpu : usr=98.04%, sys=1.56%, ctx=18, majf=0, minf=9 00:37:47.598 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:47.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.598 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.598 issued rwts: total=6400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.598 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.598 filename1: (groupid=0, jobs=1): err= 0: pid=769853: Mon Dec 9 05:32:28 2024 00:37:47.598 read: IOPS=637, BW=2551KiB/s (2612kB/s)(24.9MiB/10011msec) 00:37:47.598 slat (nsec): min=6239, max=80545, 
avg=26269.06, stdev=16679.87 00:37:47.598 clat (usec): min=17277, max=31051, avg=24818.93, stdev=838.61 00:37:47.598 lat (usec): min=17285, max=31071, avg=24845.20, stdev=840.97 00:37:47.598 clat percentiles (usec): 00:37:47.598 | 1.00th=[23725], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511], 00:37:47.598 | 30.00th=[24511], 40.00th=[24511], 50.00th=[24773], 60.00th=[24773], 00:37:47.598 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25822], 95.00th=[26084], 00:37:47.598 | 99.00th=[27132], 99.50th=[27919], 99.90th=[31065], 99.95th=[31065], 00:37:47.598 | 99.99th=[31065] 00:37:47.598 bw ( KiB/s): min= 2432, max= 2560, per=4.16%, avg=2546.53, stdev=40.36, samples=19 00:37:47.598 iops : min= 608, max= 640, avg=636.63, stdev=10.09, samples=19 00:37:47.598 lat (msec) : 20=0.50%, 50=99.50% 00:37:47.598 cpu : usr=97.43%, sys=1.94%, ctx=138, majf=0, minf=9 00:37:47.598 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:47.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.598 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.598 issued rwts: total=6384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.598 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.598 filename1: (groupid=0, jobs=1): err= 0: pid=769854: Mon Dec 9 05:32:28 2024 00:37:47.598 read: IOPS=638, BW=2555KiB/s (2616kB/s)(25.0MiB/10019msec) 00:37:47.598 slat (usec): min=6, max=100, avg=25.57, stdev=18.78 00:37:47.598 clat (usec): min=10740, max=28243, avg=24850.84, stdev=1031.52 00:37:47.598 lat (usec): min=10757, max=28264, avg=24876.41, stdev=1030.66 00:37:47.598 clat percentiles (usec): 00:37:47.598 | 1.00th=[23725], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511], 00:37:47.598 | 30.00th=[24511], 40.00th=[24773], 50.00th=[24773], 60.00th=[24773], 00:37:47.598 | 70.00th=[25035], 80.00th=[25035], 90.00th=[25822], 95.00th=[26346], 00:37:47.598 | 99.00th=[26870], 99.50th=[27132], 99.90th=[28181], 
99.95th=[28181], 00:37:47.598 | 99.99th=[28181] 00:37:47.598 bw ( KiB/s): min= 2432, max= 2688, per=4.17%, avg=2553.60, stdev=50.44, samples=20 00:37:47.598 iops : min= 608, max= 672, avg=638.40, stdev=12.61, samples=20 00:37:47.598 lat (msec) : 20=0.50%, 50=99.50% 00:37:47.598 cpu : usr=98.00%, sys=1.59%, ctx=31, majf=0, minf=9 00:37:47.598 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:47.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.598 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.598 issued rwts: total=6400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.598 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.598 filename1: (groupid=0, jobs=1): err= 0: pid=769855: Mon Dec 9 05:32:28 2024 00:37:47.598 read: IOPS=639, BW=2559KiB/s (2620kB/s)(25.0MiB/10008msec) 00:37:47.598 slat (usec): min=6, max=102, avg=48.12, stdev=19.76 00:37:47.598 clat (usec): min=7452, max=41748, avg=24548.81, stdev=1665.72 00:37:47.598 lat (usec): min=7498, max=41768, avg=24596.93, stdev=1668.53 00:37:47.598 clat percentiles (usec): 00:37:47.598 | 1.00th=[17171], 5.00th=[23462], 10.00th=[23987], 20.00th=[24249], 00:37:47.598 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511], 00:37:47.598 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25822], 95.00th=[26084], 00:37:47.598 | 99.00th=[27132], 99.50th=[29754], 99.90th=[41681], 99.95th=[41681], 00:37:47.598 | 99.99th=[41681] 00:37:47.598 bw ( KiB/s): min= 2432, max= 2688, per=4.18%, avg=2559.16, stdev=70.04, samples=19 00:37:47.598 iops : min= 608, max= 672, avg=639.79, stdev=17.51, samples=19 00:37:47.598 lat (msec) : 10=0.06%, 20=1.45%, 50=98.48% 00:37:47.598 cpu : usr=98.07%, sys=1.55%, ctx=15, majf=0, minf=9 00:37:47.598 IO depths : 1=6.0%, 2=12.0%, 4=24.1%, 8=51.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:47.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.598 complete : 
0=0.0%, 4=93.9%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.598 issued rwts: total=6402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.598 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.598 filename1: (groupid=0, jobs=1): err= 0: pid=769856: Mon Dec 9 05:32:28 2024 00:37:47.598 read: IOPS=637, BW=2552KiB/s (2613kB/s)(24.9MiB/10007msec) 00:37:47.598 slat (nsec): min=6478, max=68532, avg=17975.00, stdev=6044.54 00:37:47.598 clat (usec): min=7235, max=58766, avg=24917.11, stdev=1804.03 00:37:47.598 lat (usec): min=7249, max=58784, avg=24935.09, stdev=1803.72 00:37:47.598 clat percentiles (usec): 00:37:47.598 | 1.00th=[23462], 5.00th=[23987], 10.00th=[24511], 20.00th=[24511], 00:37:47.598 | 30.00th=[24773], 40.00th=[24773], 50.00th=[24773], 60.00th=[24773], 00:37:47.598 | 70.00th=[25035], 80.00th=[25297], 90.00th=[26084], 95.00th=[26346], 00:37:47.598 | 99.00th=[27395], 99.50th=[27657], 99.90th=[45876], 99.95th=[45876], 00:37:47.598 | 99.99th=[58983] 00:37:47.598 bw ( KiB/s): min= 2432, max= 2688, per=4.15%, avg=2540.00, stdev=63.82, samples=19 00:37:47.598 iops : min= 608, max= 672, avg=635.00, stdev=15.95, samples=19 00:37:47.598 lat (msec) : 10=0.28%, 20=0.38%, 50=99.31%, 100=0.03% 00:37:47.598 cpu : usr=97.50%, sys=1.77%, ctx=109, majf=0, minf=9 00:37:47.598 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:47.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.598 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.598 issued rwts: total=6384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.598 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.598 filename2: (groupid=0, jobs=1): err= 0: pid=769857: Mon Dec 9 05:32:28 2024 00:37:47.598 read: IOPS=636, BW=2547KiB/s (2608kB/s)(24.9MiB/10001msec) 00:37:47.598 slat (usec): min=7, max=110, avg=51.67, stdev=17.18 00:37:47.598 clat (usec): min=17204, max=42061, avg=24675.84, 
stdev=1196.68 00:37:47.598 lat (usec): min=17247, max=42083, avg=24727.51, stdev=1196.41 00:37:47.598 clat percentiles (usec): 00:37:47.599 | 1.00th=[23200], 5.00th=[23725], 10.00th=[23987], 20.00th=[24249], 00:37:47.599 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24511], 60.00th=[24773], 00:37:47.599 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25822], 95.00th=[26084], 00:37:47.599 | 99.00th=[27395], 99.50th=[27919], 99.90th=[42206], 99.95th=[42206], 00:37:47.599 | 99.99th=[42206] 00:37:47.599 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2546.53, stdev=72.59, samples=19 00:37:47.599 iops : min= 608, max= 672, avg=636.63, stdev=18.15, samples=19 00:37:47.599 lat (msec) : 20=0.25%, 50=99.75% 00:37:47.599 cpu : usr=97.28%, sys=1.98%, ctx=84, majf=0, minf=9 00:37:47.599 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:47.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.599 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.599 issued rwts: total=6368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.599 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.599 filename2: (groupid=0, jobs=1): err= 0: pid=769858: Mon Dec 9 05:32:28 2024 00:37:47.599 read: IOPS=637, BW=2550KiB/s (2611kB/s)(24.9MiB/10007msec) 00:37:47.599 slat (usec): min=6, max=110, avg=49.33, stdev=19.17 00:37:47.599 clat (usec): min=7383, max=46053, avg=24636.89, stdev=1751.15 00:37:47.599 lat (usec): min=7396, max=46069, avg=24686.22, stdev=1751.73 00:37:47.599 clat percentiles (usec): 00:37:47.599 | 1.00th=[21890], 5.00th=[23725], 10.00th=[23987], 20.00th=[24249], 00:37:47.599 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24511], 60.00th=[24511], 00:37:47.599 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25822], 95.00th=[26084], 00:37:47.599 | 99.00th=[27132], 99.50th=[30016], 99.90th=[45876], 99.95th=[45876], 00:37:47.599 | 99.99th=[45876] 00:37:47.599 bw ( KiB/s): min= 2356, max= 2688, 
per=4.16%, avg=2545.05, stdev=82.19, samples=19 00:37:47.599 iops : min= 589, max= 672, avg=636.26, stdev=20.55, samples=19 00:37:47.599 lat (msec) : 10=0.25%, 20=0.44%, 50=99.31% 00:37:47.599 cpu : usr=97.46%, sys=1.91%, ctx=78, majf=0, minf=9 00:37:47.599 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:47.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.599 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.599 issued rwts: total=6380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.599 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.599 filename2: (groupid=0, jobs=1): err= 0: pid=769859: Mon Dec 9 05:32:28 2024 00:37:47.599 read: IOPS=638, BW=2554KiB/s (2616kB/s)(25.0MiB/10013msec) 00:37:47.599 slat (nsec): min=4868, max=92752, avg=43669.02, stdev=16667.96 00:37:47.599 clat (usec): min=14925, max=36225, avg=24671.15, stdev=1405.74 00:37:47.599 lat (usec): min=14932, max=36244, avg=24714.81, stdev=1407.20 00:37:47.599 clat percentiles (usec): 00:37:47.599 | 1.00th=[18220], 5.00th=[23725], 10.00th=[23987], 20.00th=[24249], 00:37:47.599 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24511], 60.00th=[24773], 00:37:47.599 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25822], 95.00th=[26084], 00:37:47.599 | 99.00th=[29754], 99.50th=[32900], 99.90th=[35914], 99.95th=[35914], 00:37:47.599 | 99.99th=[36439] 00:37:47.599 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2550.74, stdev=65.61, samples=19 00:37:47.599 iops : min= 608, max= 672, avg=637.68, stdev=16.40, samples=19 00:37:47.599 lat (msec) : 20=1.19%, 50=98.81% 00:37:47.599 cpu : usr=98.12%, sys=1.50%, ctx=37, majf=0, minf=9 00:37:47.599 IO depths : 1=5.7%, 2=11.4%, 4=23.4%, 8=52.4%, 16=7.1%, 32=0.0%, >=64=0.0% 00:37:47.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.599 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.599 issued 
rwts: total=6394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.599 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.599 filename2: (groupid=0, jobs=1): err= 0: pid=769860: Mon Dec 9 05:32:28 2024 00:37:47.599 read: IOPS=637, BW=2550KiB/s (2611kB/s)(24.9MiB/10013msec) 00:37:47.599 slat (nsec): min=6216, max=94073, avg=45609.74, stdev=14588.57 00:37:47.599 clat (usec): min=17420, max=30251, avg=24718.17, stdev=791.65 00:37:47.599 lat (usec): min=17470, max=30268, avg=24763.78, stdev=791.24 00:37:47.599 clat percentiles (usec): 00:37:47.599 | 1.00th=[23462], 5.00th=[23987], 10.00th=[23987], 20.00th=[24249], 00:37:47.599 | 30.00th=[24511], 40.00th=[24511], 50.00th=[24511], 60.00th=[24773], 00:37:47.599 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25822], 95.00th=[26084], 00:37:47.599 | 99.00th=[26870], 99.50th=[27395], 99.90th=[30278], 99.95th=[30278], 00:37:47.599 | 99.99th=[30278] 00:37:47.599 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2546.53, stdev=58.73, samples=19 00:37:47.599 iops : min= 608, max= 672, avg=636.63, stdev=14.68, samples=19 00:37:47.599 lat (msec) : 20=0.25%, 50=99.75% 00:37:47.599 cpu : usr=97.91%, sys=1.68%, ctx=29, majf=0, minf=9 00:37:47.599 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:47.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.599 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.599 issued rwts: total=6384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.599 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.599 filename2: (groupid=0, jobs=1): err= 0: pid=769861: Mon Dec 9 05:32:28 2024 00:37:47.599 read: IOPS=638, BW=2555KiB/s (2616kB/s)(25.0MiB/10020msec) 00:37:47.599 slat (usec): min=6, max=106, avg=40.27, stdev=20.34 00:37:47.599 clat (usec): min=10617, max=29270, avg=24742.92, stdev=1157.66 00:37:47.599 lat (usec): min=10636, max=29311, avg=24783.19, stdev=1158.71 00:37:47.599 clat 
percentiles (usec): 00:37:47.599 | 1.00th=[21103], 5.00th=[23987], 10.00th=[23987], 20.00th=[24249], 00:37:47.599 | 30.00th=[24511], 40.00th=[24511], 50.00th=[24773], 60.00th=[24773], 00:37:47.599 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25822], 95.00th=[26346], 00:37:47.599 | 99.00th=[27919], 99.50th=[28181], 99.90th=[28705], 99.95th=[28967], 00:37:47.599 | 99.99th=[29230] 00:37:47.599 bw ( KiB/s): min= 2432, max= 2688, per=4.17%, avg=2553.60, stdev=50.44, samples=20 00:37:47.599 iops : min= 608, max= 672, avg=638.40, stdev=12.61, samples=20 00:37:47.599 lat (msec) : 20=0.50%, 50=99.50% 00:37:47.599 cpu : usr=97.99%, sys=1.62%, ctx=13, majf=0, minf=9 00:37:47.599 IO depths : 1=5.3%, 2=11.5%, 4=24.7%, 8=51.3%, 16=7.2%, 32=0.0%, >=64=0.0% 00:37:47.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.599 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.599 issued rwts: total=6400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.599 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.599 filename2: (groupid=0, jobs=1): err= 0: pid=769862: Mon Dec 9 05:32:28 2024 00:37:47.599 read: IOPS=637, BW=2551KiB/s (2612kB/s)(24.9MiB/10011msec) 00:37:47.599 slat (nsec): min=4324, max=90026, avg=46106.65, stdev=14838.37 00:37:47.599 clat (usec): min=17306, max=29286, avg=24706.22, stdev=781.70 00:37:47.599 lat (usec): min=17355, max=29305, avg=24752.33, stdev=781.70 00:37:47.599 clat percentiles (usec): 00:37:47.599 | 1.00th=[23462], 5.00th=[23987], 10.00th=[23987], 20.00th=[24249], 00:37:47.599 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24511], 60.00th=[24773], 00:37:47.599 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25822], 95.00th=[26084], 00:37:47.599 | 99.00th=[26870], 99.50th=[27657], 99.90th=[28443], 99.95th=[28443], 00:37:47.599 | 99.99th=[29230] 00:37:47.599 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2546.74, stdev=58.30, samples=19 00:37:47.599 iops : min= 608, max= 672, 
avg=636.68, stdev=14.58, samples=19 00:37:47.599 lat (msec) : 20=0.25%, 50=99.75% 00:37:47.599 cpu : usr=97.91%, sys=1.68%, ctx=34, majf=0, minf=9 00:37:47.599 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:47.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.599 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.599 issued rwts: total=6384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.599 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.599 filename2: (groupid=0, jobs=1): err= 0: pid=769863: Mon Dec 9 05:32:28 2024 00:37:47.599 read: IOPS=637, BW=2551KiB/s (2612kB/s)(24.9MiB/10011msec) 00:37:47.599 slat (nsec): min=6127, max=63071, avg=18638.99, stdev=11315.49 00:37:47.599 clat (usec): min=15504, max=32866, avg=24949.65, stdev=1069.86 00:37:47.599 lat (usec): min=15518, max=32888, avg=24968.29, stdev=1069.42 00:37:47.599 clat percentiles (usec): 00:37:47.599 | 1.00th=[20841], 5.00th=[23987], 10.00th=[24511], 20.00th=[24511], 00:37:47.599 | 30.00th=[24773], 40.00th=[24773], 50.00th=[24773], 60.00th=[24773], 00:37:47.599 | 70.00th=[25035], 80.00th=[25297], 90.00th=[26084], 95.00th=[26346], 00:37:47.599 | 99.00th=[28181], 99.50th=[29754], 99.90th=[32637], 99.95th=[32900], 00:37:47.599 | 99.99th=[32900] 00:37:47.599 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2547.20, stdev=71.10, samples=20 00:37:47.599 iops : min= 608, max= 672, avg=636.80, stdev=17.78, samples=20 00:37:47.599 lat (msec) : 20=0.75%, 50=99.25% 00:37:47.599 cpu : usr=97.86%, sys=1.64%, ctx=72, majf=0, minf=9 00:37:47.599 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:37:47.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.599 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.599 issued rwts: total=6384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.599 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:37:47.599 filename2: (groupid=0, jobs=1): err= 0: pid=769864: Mon Dec 9 05:32:28 2024 00:37:47.599 read: IOPS=636, BW=2548KiB/s (2609kB/s)(24.9MiB/10008msec) 00:37:47.599 slat (nsec): min=5975, max=89846, avg=39612.21, stdev=18058.00 00:37:47.599 clat (usec): min=7421, max=46407, avg=24771.91, stdev=1707.50 00:37:47.599 lat (usec): min=7441, max=46439, avg=24811.52, stdev=1707.63 00:37:47.599 clat percentiles (usec): 00:37:47.599 | 1.00th=[23462], 5.00th=[23987], 10.00th=[23987], 20.00th=[24249], 00:37:47.599 | 30.00th=[24511], 40.00th=[24511], 50.00th=[24511], 60.00th=[24773], 00:37:47.599 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25822], 95.00th=[26084], 00:37:47.599 | 99.00th=[27657], 99.50th=[30016], 99.90th=[46400], 99.95th=[46400], 00:37:47.599 | 99.99th=[46400] 00:37:47.599 bw ( KiB/s): min= 2432, max= 2688, per=4.15%, avg=2542.53, stdev=71.40, samples=19 00:37:47.599 iops : min= 608, max= 672, avg=635.63, stdev=17.85, samples=19 00:37:47.599 lat (msec) : 10=0.25%, 20=0.25%, 50=99.50% 00:37:47.599 cpu : usr=98.03%, sys=1.53%, ctx=58, majf=0, minf=9 00:37:47.599 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:47.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.600 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:47.600 issued rwts: total=6374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:47.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:47.600 00:37:47.600 Run status group 0 (all jobs): 00:37:47.600 READ: bw=59.8MiB/s (62.7MB/s), 2547KiB/s-2589KiB/s (2608kB/s-2651kB/s), io=599MiB (628MB), run=10001-10020msec 00:37:47.600 05:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:47.600 05:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:47.600 05:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:47.600 05:32:28 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:47.600 05:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:47.600 05:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:47.600 05:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.600 05:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:47.600 05:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.600 05:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:47.600 05:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.600 05:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:47.600 05:32:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.600 05:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:47.600 05:32:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:47.600 
05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:47.600 bdev_null0 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:37:47.600 [2024-12-09 05:32:29.067422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:47.600 bdev_null1 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.600 05:32:29 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:47.600 { 00:37:47.600 "params": { 00:37:47.600 "name": "Nvme$subsystem", 00:37:47.600 "trtype": "$TEST_TRANSPORT", 00:37:47.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:47.600 "adrfam": "ipv4", 00:37:47.600 "trsvcid": "$NVMF_PORT", 00:37:47.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:47.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:47.600 "hdgst": ${hdgst:-false}, 00:37:47.600 "ddgst": ${ddgst:-false} 00:37:47.600 }, 00:37:47.600 "method": "bdev_nvme_attach_controller" 00:37:47.600 } 00:37:47.600 EOF 00:37:47.600 )") 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:47.600 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:47.601 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:47.601 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:47.601 05:32:29 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:47.601 05:32:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:47.601 { 00:37:47.601 "params": { 00:37:47.601 "name": "Nvme$subsystem", 00:37:47.601 "trtype": "$TEST_TRANSPORT", 00:37:47.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:47.601 "adrfam": "ipv4", 00:37:47.601 "trsvcid": "$NVMF_PORT", 00:37:47.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:47.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:47.601 "hdgst": ${hdgst:-false}, 00:37:47.601 "ddgst": ${ddgst:-false} 00:37:47.601 }, 00:37:47.601 "method": "bdev_nvme_attach_controller" 00:37:47.601 } 00:37:47.601 EOF 00:37:47.601 )") 00:37:47.601 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:47.601 05:32:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:47.601 05:32:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:47.601 05:32:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:37:47.601 05:32:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:47.601 05:32:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:47.601 "params": { 00:37:47.601 "name": "Nvme0", 00:37:47.601 "trtype": "tcp", 00:37:47.601 "traddr": "10.0.0.2", 00:37:47.601 "adrfam": "ipv4", 00:37:47.601 "trsvcid": "4420", 00:37:47.601 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:47.601 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:47.601 "hdgst": false, 00:37:47.601 "ddgst": false 00:37:47.601 }, 00:37:47.601 "method": "bdev_nvme_attach_controller" 00:37:47.601 },{ 00:37:47.601 "params": { 00:37:47.601 "name": "Nvme1", 00:37:47.601 "trtype": "tcp", 00:37:47.601 "traddr": "10.0.0.2", 00:37:47.601 "adrfam": "ipv4", 00:37:47.601 "trsvcid": "4420", 00:37:47.601 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:47.601 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:47.601 "hdgst": false, 00:37:47.601 "ddgst": false 00:37:47.601 }, 00:37:47.601 "method": "bdev_nvme_attach_controller" 00:37:47.601 }' 00:37:47.601 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:47.601 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:47.601 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:47.601 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:47.601 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:47.601 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:47.601 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:47.601 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:47.601 05:32:29 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:47.601 05:32:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:47.601 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:47.601 ... 00:37:47.601 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:47.601 ... 00:37:47.601 fio-3.35 00:37:47.601 Starting 4 threads 00:37:52.875 00:37:52.875 filename0: (groupid=0, jobs=1): err= 0: pid=771874: Mon Dec 9 05:32:35 2024 00:37:52.875 read: IOPS=2750, BW=21.5MiB/s (22.5MB/s)(107MiB/5002msec) 00:37:52.875 slat (nsec): min=6082, max=73474, avg=16904.22, stdev=11067.44 00:37:52.875 clat (usec): min=878, max=5180, avg=2856.23, stdev=425.91 00:37:52.875 lat (usec): min=901, max=5188, avg=2873.13, stdev=426.44 00:37:52.875 clat percentiles (usec): 00:37:52.875 | 1.00th=[ 1811], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2540], 00:37:52.875 | 30.00th=[ 2704], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 2933], 00:37:52.875 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3294], 95.00th=[ 3556], 00:37:52.875 | 99.00th=[ 4146], 99.50th=[ 4293], 99.90th=[ 4817], 99.95th=[ 5014], 00:37:52.875 | 99.99th=[ 5145] 00:37:52.875 bw ( KiB/s): min=21184, max=22672, per=25.69%, avg=21975.11, stdev=602.56, samples=9 00:37:52.875 iops : min= 2648, max= 2834, avg=2746.89, stdev=75.32, samples=9 00:37:52.875 lat (usec) : 1000=0.01% 00:37:52.875 lat (msec) : 2=2.20%, 4=96.13%, 10=1.66% 00:37:52.875 cpu : usr=96.16%, sys=3.46%, ctx=7, majf=0, minf=9 00:37:52.875 IO depths : 1=0.6%, 2=7.0%, 4=63.9%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:52.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.875 complete : 0=0.0%, 
4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.875 issued rwts: total=13756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:52.875 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:52.875 filename0: (groupid=0, jobs=1): err= 0: pid=771875: Mon Dec 9 05:32:35 2024 00:37:52.875 read: IOPS=2580, BW=20.2MiB/s (21.1MB/s)(101MiB/5001msec) 00:37:52.875 slat (nsec): min=5826, max=62710, avg=11395.38, stdev=6781.49 00:37:52.875 clat (usec): min=712, max=5711, avg=3067.34, stdev=463.48 00:37:52.875 lat (usec): min=723, max=5734, avg=3078.74, stdev=463.35 00:37:52.875 clat percentiles (usec): 00:37:52.875 | 1.00th=[ 2057], 5.00th=[ 2442], 10.00th=[ 2606], 20.00th=[ 2835], 00:37:52.875 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 3032], 00:37:52.875 | 70.00th=[ 3163], 80.00th=[ 3294], 90.00th=[ 3589], 95.00th=[ 3916], 00:37:52.875 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 5342], 99.95th=[ 5342], 00:37:52.875 | 99.99th=[ 5538] 00:37:52.875 bw ( KiB/s): min=20416, max=20880, per=24.13%, avg=20637.44, stdev=164.94, samples=9 00:37:52.875 iops : min= 2552, max= 2610, avg=2579.67, stdev=20.63, samples=9 00:37:52.875 lat (usec) : 750=0.04%, 1000=0.04% 00:37:52.875 lat (msec) : 2=0.55%, 4=94.78%, 10=4.59% 00:37:52.875 cpu : usr=95.54%, sys=4.12%, ctx=7, majf=0, minf=9 00:37:52.875 IO depths : 1=0.1%, 2=3.2%, 4=68.4%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:52.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.875 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.875 issued rwts: total=12905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:52.875 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:52.875 filename1: (groupid=0, jobs=1): err= 0: pid=771876: Mon Dec 9 05:32:35 2024 00:37:52.875 read: IOPS=2772, BW=21.7MiB/s (22.7MB/s)(108MiB/5002msec) 00:37:52.875 slat (nsec): min=5893, max=69859, avg=11934.30, stdev=7070.55 00:37:52.875 clat (usec): min=695, max=5528, 
avg=2848.81, stdev=429.41 00:37:52.875 lat (usec): min=710, max=5550, avg=2860.75, stdev=429.64 00:37:52.875 clat percentiles (usec): 00:37:52.875 | 1.00th=[ 1745], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2507], 00:37:52.875 | 30.00th=[ 2671], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 2933], 00:37:52.875 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3294], 95.00th=[ 3556], 00:37:52.875 | 99.00th=[ 4178], 99.50th=[ 4424], 99.90th=[ 4883], 99.95th=[ 5080], 00:37:52.875 | 99.99th=[ 5538] 00:37:52.875 bw ( KiB/s): min=21488, max=22880, per=25.94%, avg=22188.44, stdev=575.04, samples=9 00:37:52.875 iops : min= 2686, max= 2860, avg=2773.56, stdev=71.88, samples=9 00:37:52.875 lat (usec) : 750=0.01% 00:37:52.875 lat (msec) : 2=2.37%, 4=96.17%, 10=1.45% 00:37:52.875 cpu : usr=95.06%, sys=4.62%, ctx=8, majf=0, minf=9 00:37:52.875 IO depths : 1=0.5%, 2=7.3%, 4=63.0%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:52.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.875 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.875 issued rwts: total=13870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:52.875 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:52.875 filename1: (groupid=0, jobs=1): err= 0: pid=771877: Mon Dec 9 05:32:35 2024 00:37:52.875 read: IOPS=2587, BW=20.2MiB/s (21.2MB/s)(101MiB/5001msec) 00:37:52.875 slat (nsec): min=5869, max=66365, avg=11151.86, stdev=6763.17 00:37:52.875 clat (usec): min=1040, max=5192, avg=3058.35, stdev=409.76 00:37:52.875 lat (usec): min=1055, max=5201, avg=3069.50, stdev=409.43 00:37:52.875 clat percentiles (usec): 00:37:52.875 | 1.00th=[ 2057], 5.00th=[ 2442], 10.00th=[ 2671], 20.00th=[ 2868], 00:37:52.875 | 30.00th=[ 2933], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 3064], 00:37:52.875 | 70.00th=[ 3163], 80.00th=[ 3261], 90.00th=[ 3556], 95.00th=[ 3785], 00:37:52.875 | 99.00th=[ 4424], 99.50th=[ 4752], 99.90th=[ 5145], 99.95th=[ 5145], 00:37:52.875 | 99.99th=[ 
5211] 00:37:52.875 bw ( KiB/s): min=20160, max=21280, per=24.18%, avg=20677.33, stdev=380.99, samples=9 00:37:52.875 iops : min= 2520, max= 2660, avg=2584.67, stdev=47.62, samples=9 00:37:52.875 lat (msec) : 2=0.90%, 4=96.42%, 10=2.67% 00:37:52.875 cpu : usr=95.10%, sys=4.58%, ctx=12, majf=0, minf=9 00:37:52.875 IO depths : 1=0.2%, 2=2.6%, 4=70.3%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:52.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.875 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.875 issued rwts: total=12942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:52.875 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:52.875 00:37:52.875 Run status group 0 (all jobs): 00:37:52.875 READ: bw=83.5MiB/s (87.6MB/s), 20.2MiB/s-21.7MiB/s (21.1MB/s-22.7MB/s), io=418MiB (438MB), run=5001-5002msec 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 
00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.134 00:37:53.134 real 0m24.531s 00:37:53.134 user 4m57.639s 00:37:53.134 sys 0m7.076s 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:53.134 05:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:53.134 ************************************ 00:37:53.134 END TEST fio_dif_rand_params 00:37:53.134 ************************************ 00:37:53.134 05:32:35 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:53.134 05:32:35 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:53.134 05:32:35 nvmf_dif -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:37:53.134 05:32:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:53.393 ************************************ 00:37:53.393 START TEST fio_dif_digest 00:37:53.393 ************************************ 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # 
set +x 00:37:53.393 bdev_null0 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:53.393 [2024-12-09 05:32:35.652597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # local subsystem config 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:53.393 { 00:37:53.393 "params": { 00:37:53.393 "name": "Nvme$subsystem", 00:37:53.393 "trtype": "$TEST_TRANSPORT", 00:37:53.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:53.393 "adrfam": "ipv4", 00:37:53.393 "trsvcid": "$NVMF_PORT", 00:37:53.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:53.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:53.393 "hdgst": ${hdgst:-false}, 00:37:53.393 "ddgst": ${ddgst:-false} 00:37:53.393 }, 00:37:53.393 "method": "bdev_nvme_attach_controller" 00:37:53.393 } 00:37:53.393 EOF 00:37:53.393 )") 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 
-- # shift 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:53.393 "params": { 00:37:53.393 "name": "Nvme0", 00:37:53.393 "trtype": "tcp", 00:37:53.393 "traddr": "10.0.0.2", 00:37:53.393 "adrfam": "ipv4", 00:37:53.393 "trsvcid": "4420", 00:37:53.393 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:53.393 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:53.393 "hdgst": true, 00:37:53.393 "ddgst": true 00:37:53.393 }, 00:37:53.393 "method": "bdev_nvme_attach_controller" 00:37:53.393 }' 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:53.393 05:32:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:53.651 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:53.651 ... 00:37:53.651 fio-3.35 00:37:53.651 Starting 3 threads 00:38:05.868 00:38:05.868 filename0: (groupid=0, jobs=1): err= 0: pid=773070: Mon Dec 9 05:32:46 2024 00:38:05.868 read: IOPS=292, BW=36.5MiB/s (38.3MB/s)(367MiB/10045msec) 00:38:05.868 slat (nsec): min=6242, max=33100, avg=11249.38, stdev=1970.99 00:38:05.868 clat (usec): min=7792, max=52776, avg=10236.42, stdev=1246.15 00:38:05.868 lat (usec): min=7804, max=52789, avg=10247.67, stdev=1246.15 00:38:05.868 clat percentiles (usec): 00:38:05.868 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:38:05.868 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:38:05.868 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11076], 95.00th=[11469], 00:38:05.868 | 99.00th=[11994], 99.50th=[12125], 99.90th=[13042], 99.95th=[46400], 00:38:05.868 | 99.99th=[52691] 00:38:05.868 bw ( KiB/s): min=36608, max=38656, per=35.09%, avg=37551.50, stdev=518.16, samples=20 00:38:05.868 iops : min= 286, max= 302, avg=293.35, stdev= 4.08, samples=20 00:38:05.868 lat (msec) : 10=37.81%, 20=62.13%, 50=0.03%, 100=0.03% 00:38:05.868 cpu : 
usr=92.04%, sys=7.68%, ctx=15, majf=0, minf=23 00:38:05.868 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:05.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:05.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:05.868 issued rwts: total=2936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:05.868 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:05.868 filename0: (groupid=0, jobs=1): err= 0: pid=773071: Mon Dec 9 05:32:46 2024 00:38:05.868 read: IOPS=276, BW=34.6MiB/s (36.3MB/s)(348MiB/10043msec) 00:38:05.868 slat (nsec): min=6157, max=25733, avg=11108.84, stdev=1902.06 00:38:05.868 clat (usec): min=7329, max=46136, avg=10810.75, stdev=1180.24 00:38:05.868 lat (usec): min=7343, max=46144, avg=10821.85, stdev=1180.16 00:38:05.868 clat percentiles (usec): 00:38:05.868 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10159], 00:38:05.868 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:38:05.868 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11731], 95.00th=[11994], 00:38:05.868 | 99.00th=[12649], 99.50th=[12911], 99.90th=[13829], 99.95th=[44827], 00:38:05.868 | 99.99th=[45876] 00:38:05.868 bw ( KiB/s): min=34816, max=36608, per=33.22%, avg=35558.40, stdev=454.17, samples=20 00:38:05.868 iops : min= 272, max= 286, avg=277.80, stdev= 3.55, samples=20 00:38:05.868 lat (msec) : 10=13.85%, 20=86.08%, 50=0.07% 00:38:05.868 cpu : usr=91.71%, sys=7.99%, ctx=16, majf=0, minf=18 00:38:05.868 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:05.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:05.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:05.868 issued rwts: total=2780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:05.868 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:05.868 filename0: (groupid=0, jobs=1): err= 0: pid=773072: 
Mon Dec 9 05:32:46 2024 00:38:05.868 read: IOPS=267, BW=33.4MiB/s (35.0MB/s)(335MiB/10044msec) 00:38:05.868 slat (usec): min=6, max=112, avg=11.63, stdev= 2.70 00:38:05.868 clat (usec): min=8857, max=51053, avg=11201.42, stdev=1227.19 00:38:05.868 lat (usec): min=8868, max=51061, avg=11213.05, stdev=1227.19 00:38:05.868 clat percentiles (usec): 00:38:05.868 | 1.00th=[ 9634], 5.00th=[10028], 10.00th=[10290], 20.00th=[10552], 00:38:05.868 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:38:05.869 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:38:05.869 | 99.00th=[12911], 99.50th=[13173], 99.90th=[13829], 99.95th=[44827], 00:38:05.869 | 99.99th=[51119] 00:38:05.869 bw ( KiB/s): min=33792, max=35072, per=32.06%, avg=34316.80, stdev=347.21, samples=20 00:38:05.869 iops : min= 264, max= 274, avg=268.10, stdev= 2.71, samples=20 00:38:05.869 lat (msec) : 10=4.17%, 20=95.75%, 50=0.04%, 100=0.04% 00:38:05.869 cpu : usr=91.87%, sys=7.83%, ctx=15, majf=0, minf=27 00:38:05.869 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:05.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:05.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:05.869 issued rwts: total=2683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:05.869 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:05.869 00:38:05.869 Run status group 0 (all jobs): 00:38:05.869 READ: bw=105MiB/s (110MB/s), 33.4MiB/s-36.5MiB/s (35.0MB/s-38.3MB/s), io=1050MiB (1101MB), run=10043-10045msec 00:38:05.869 05:32:46 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:38:05.869 05:32:46 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:38:05.869 05:32:46 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:38:05.869 05:32:46 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:05.869 05:32:46 nvmf_dif.fio_dif_digest 
-- target/dif.sh@36 -- # local sub_id=0 00:38:05.869 05:32:46 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:05.869 05:32:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.869 05:32:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:05.869 05:32:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.869 05:32:46 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:05.869 05:32:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.869 05:32:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:05.869 05:32:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.869 00:38:05.869 real 0m11.384s 00:38:05.869 user 0m36.911s 00:38:05.869 sys 0m2.733s 00:38:05.869 05:32:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:05.869 05:32:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:05.869 ************************************ 00:38:05.869 END TEST fio_dif_digest 00:38:05.869 ************************************ 00:38:05.869 05:32:47 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:38:05.869 05:32:47 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:38:05.869 05:32:47 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:05.869 05:32:47 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:38:05.869 05:32:47 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:05.869 05:32:47 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:38:05.869 05:32:47 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:05.869 05:32:47 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:05.869 rmmod nvme_tcp 00:38:05.869 rmmod nvme_fabrics 00:38:05.869 rmmod nvme_keyring 00:38:05.869 05:32:47 nvmf_dif -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:38:05.869 05:32:47 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:38:05.869 05:32:47 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:38:05.869 05:32:47 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 764179 ']' 00:38:05.869 05:32:47 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 764179 00:38:05.869 05:32:47 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 764179 ']' 00:38:05.869 05:32:47 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 764179 00:38:05.869 05:32:47 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:38:05.869 05:32:47 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:05.869 05:32:47 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 764179 00:38:05.869 05:32:47 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:05.869 05:32:47 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:05.869 05:32:47 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 764179' 00:38:05.869 killing process with pid 764179 00:38:05.869 05:32:47 nvmf_dif -- common/autotest_common.sh@973 -- # kill 764179 00:38:05.869 05:32:47 nvmf_dif -- common/autotest_common.sh@978 -- # wait 764179 00:38:05.869 05:32:47 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:05.869 05:32:47 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:08.408 Waiting for block devices as requested 00:38:08.408 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:08.408 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:08.668 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:08.668 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:08.668 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:08.928 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:08.928 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:08.928 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 
00:38:09.188 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:09.188 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:09.188 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:09.448 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:09.448 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:09.448 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:09.707 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:09.707 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:09.707 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:38:09.966 05:32:52 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:09.966 05:32:52 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:09.966 05:32:52 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:38:09.966 05:32:52 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:38:09.966 05:32:52 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:09.966 05:32:52 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:38:09.966 05:32:52 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:09.966 05:32:52 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:09.966 05:32:52 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:09.966 05:32:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:09.966 05:32:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:12.506 05:32:54 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:12.506 00:38:12.506 real 1m18.450s 00:38:12.506 user 7m23.373s 00:38:12.506 sys 0m28.439s 00:38:12.506 05:32:54 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:12.506 05:32:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:12.506 ************************************ 00:38:12.506 END TEST nvmf_dif 00:38:12.506 ************************************ 00:38:12.506 05:32:54 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:12.506 05:32:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:12.506 05:32:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:12.506 05:32:54 -- common/autotest_common.sh@10 -- # set +x 00:38:12.506 ************************************ 00:38:12.506 START TEST nvmf_abort_qd_sizes 00:38:12.506 ************************************ 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:12.506 * Looking for test storage... 00:38:12.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 
00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:38:12.506 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:12.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:12.507 --rc genhtml_branch_coverage=1 00:38:12.507 --rc genhtml_function_coverage=1 00:38:12.507 --rc 
genhtml_legend=1 00:38:12.507 --rc geninfo_all_blocks=1 00:38:12.507 --rc geninfo_unexecuted_blocks=1 00:38:12.507 00:38:12.507 ' 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:12.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:12.507 --rc genhtml_branch_coverage=1 00:38:12.507 --rc genhtml_function_coverage=1 00:38:12.507 --rc genhtml_legend=1 00:38:12.507 --rc geninfo_all_blocks=1 00:38:12.507 --rc geninfo_unexecuted_blocks=1 00:38:12.507 00:38:12.507 ' 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:12.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:12.507 --rc genhtml_branch_coverage=1 00:38:12.507 --rc genhtml_function_coverage=1 00:38:12.507 --rc genhtml_legend=1 00:38:12.507 --rc geninfo_all_blocks=1 00:38:12.507 --rc geninfo_unexecuted_blocks=1 00:38:12.507 00:38:12.507 ' 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:12.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:12.507 --rc genhtml_branch_coverage=1 00:38:12.507 --rc genhtml_function_coverage=1 00:38:12.507 --rc genhtml_legend=1 00:38:12.507 --rc geninfo_all_blocks=1 00:38:12.507 --rc geninfo_unexecuted_blocks=1 00:38:12.507 00:38:12.507 ' 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:12.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:38:12.507 05:32:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:20.651 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:20.652 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:20.652 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:20.652 Found net devices under 0000:af:00.0: cvl_0_0 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:20.652 Found net devices under 0000:af:00.1: cvl_0_1 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:20.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:20.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:38:20.652 00:38:20.652 --- 10.0.0.2 ping statistics --- 00:38:20.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:20.652 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:20.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:20.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:38:20.652 00:38:20.652 --- 10.0.0.1 ping statistics --- 00:38:20.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:20.652 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:38:20.652 05:33:01 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:23.188 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:23.188 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:23.188 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:23.188 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:23.188 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:23.188 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:23.188 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:23.188 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:23.188 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:23.188 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:23.188 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:23.188 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:23.188 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:23.188 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:23.188 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:38:23.188 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:24.565 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:38:24.565 05:33:07 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:24.565 05:33:07 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:24.565 05:33:07 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:24.565 05:33:07 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:24.565 05:33:07 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:24.565 05:33:07 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:24.824 05:33:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:38:24.824 05:33:07 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:24.824 05:33:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:24.824 05:33:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:24.824 05:33:07 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=782193 00:38:24.824 05:33:07 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:38:24.824 05:33:07 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 782193 00:38:24.824 05:33:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 782193 ']' 00:38:24.824 05:33:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:24.824 05:33:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:24.824 05:33:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
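The nvmf_tcp_init sequence above isolates the target-side port (cvl_0_0) in a private network namespace, gives each side a 10.0.0.0/24 address, brings the links up, and admits TCP port 4420 through iptables before cross-pinging both directions. The following sketch (not nvmf/common.sh itself) only *emits* the privileged commands, so the sequence can be inspected without root; the interface names, namespace name, and addresses are the ones from this log.

```shell
#!/usr/bin/env bash
# Emit the namespace-plumbing commands the log performs, in order.
build_netns_cmds() {
    local target_if=$1 initiator_if=$2 ns=$3
    printf '%s\n' \
        "ip netns add $ns" \
        "ip link set $target_if netns $ns" \
        "ip addr add 10.0.0.1/24 dev $initiator_if" \
        "ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if" \
        "ip link set $initiator_if up" \
        "ip netns exec $ns ip link set $target_if up" \
        "ip netns exec $ns ip link set lo up" \
        "iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT"
}

build_netns_cmds cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Printing rather than executing keeps the sketch runnable anywhere; piping the output to `sudo sh` would perform the actual setup.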
00:38:24.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:24.824 05:33:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:24.824 05:33:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:24.824 [2024-12-09 05:33:07.118696] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:38:24.824 [2024-12-09 05:33:07.118745] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:24.824 [2024-12-09 05:33:07.216671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:24.824 [2024-12-09 05:33:07.260071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:24.824 [2024-12-09 05:33:07.260107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:24.824 [2024-12-09 05:33:07.260117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:24.824 [2024-12-09 05:33:07.260125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:24.825 [2024-12-09 05:33:07.260132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
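The nvmf_tgt above is launched with `-m 0xf`, a hex bitmask selecting CPU cores 0 through 3, which is why four reactors come up. A small helper (not part of SPDK) decoding such a mask:

```shell
# Decode an SPDK-style core mask (e.g. 0xf) into the list of selected cores.
mask_to_cores() {
    local mask=$(( $1 )) core=0 cores=()
    while (( mask )); do
        (( mask & 1 )) && cores+=("$core")   # low bit set -> this core is selected
        (( mask >>= 1, core += 1 ))          # shift to the next bit
    done
    echo "${cores[*]}"
}

mask_to_cores 0xf   # -> 0 1 2 3
```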
00:38:24.825 [2024-12-09 05:33:07.261889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:24.825 [2024-12-09 05:33:07.262007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:24.825 [2024-12-09 05:33:07.262041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:24.825 [2024-12-09 05:33:07.262039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:25.763 05:33:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:25.763 05:33:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:38:25.763 05:33:07 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:25.763 05:33:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:25.763 05:33:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:25.763 05:33:07 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:25.763 05:33:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:25.763 05:33:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:25.763 05:33:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:25.763 05:33:08 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:38:25.763 05:33:08 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:38:25.763 05:33:08 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:d8:00.0 ]] 00:38:25.763 05:33:08 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:38:25.763 05:33:08 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:25.763 05:33:08 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 
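The nvme_in_userspace trace above (scripts/common.sh@316) resolves NVMe controllers by looking up PCI class code 0x010802 in a prebuilt cache: an associative array mapping class or vendor:device keys to space-separated BDF lists, which is then word-split into an array. A minimal reproduction of that lookup; the cache contents are stand-ins mirroring this run's devices, not read from a live system:

```shell
# pci_bus_cache maps a PCI class code (or vendor:device pair) to the BDFs
# that matched it during enumeration.
declare -A pci_bus_cache=(
    ["0x010802"]="0000:d8:00.0"                     # NVMe class code
    ["0x8086:0x159b"]="0000:af:00.0 0000:af:00.1"   # Intel E810 NICs from this log
)

# Intentionally unquoted so the space-separated list splits into array elements,
# exactly as in nvmes=(${pci_bus_cache["0x010802"]}) at common.sh@316.
nvmes=(${pci_bus_cache["0x010802"]})
printf '%s\n' "${nvmes[@]}"
```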
00:38:25.763 05:33:08 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:38:25.763 05:33:08 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:25.763 05:33:08 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:25.763 05:33:08 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:38:25.763 05:33:08 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:d8:00.0 00:38:25.763 05:33:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:25.763 05:33:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:d8:00.0 00:38:25.763 05:33:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:25.763 05:33:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:25.763 05:33:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:25.763 05:33:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:25.763 ************************************ 00:38:25.763 START TEST spdk_target_abort 00:38:25.763 ************************************ 00:38:25.763 05:33:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:38:25.763 05:33:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:25.763 05:33:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target 00:38:25.763 05:33:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.763 05:33:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:29.055 spdk_targetn1 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:29.055 [2024-12-09 05:33:10.904073] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:29.055 [2024-12-09 05:33:10.961093] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:29.055 05:33:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:32.473 Initializing NVMe Controllers 00:38:32.473 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:32.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:32.473 Initialization complete. Launching workers. 
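The loop at abort_qd_sizes.sh@28-29 above accretes the `-r` transport string for the abort example one key:value pair at a time. A minimal reimplementation of that accumulation using bash indirect expansion:

```shell
# Build the "trtype:... adrfam:... traddr:... trsvcid:... subnqn:..." string
# the abort example is invoked with.
build_traddr_string() {
    local trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
    local r target=""
    for r in trtype adrfam traddr trsvcid subnqn; do
        target+="${target:+ }$r:${!r}"   # append "key:value", space-separated
    done
    printf '%s\n' "$target"
}

build_traddr_string tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn
# -> trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn
```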
00:38:32.473 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17121, failed: 0 00:38:32.473 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1419, failed to submit 15702 00:38:32.473 success 738, unsuccessful 681, failed 0 00:38:32.473 05:33:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:32.473 05:33:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:35.858 Initializing NVMe Controllers 00:38:35.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:35.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:35.858 Initialization complete. Launching workers. 00:38:35.858 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8571, failed: 0 00:38:35.858 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1273, failed to submit 7298 00:38:35.858 success 276, unsuccessful 997, failed 0 00:38:35.858 05:33:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:35.858 05:33:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:39.165 Initializing NVMe Controllers 00:38:39.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:39.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:39.165 Initialization complete. Launching workers. 
00:38:39.165 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39085, failed: 0 00:38:39.165 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2905, failed to submit 36180 00:38:39.165 success 603, unsuccessful 2302, failed 0 00:38:39.165 05:33:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:39.165 05:33:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:39.165 05:33:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:39.165 05:33:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:39.165 05:33:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:39.165 05:33:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:39.165 05:33:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:40.546 05:33:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:40.546 05:33:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 782193 00:38:40.546 05:33:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 782193 ']' 00:38:40.546 05:33:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 782193 00:38:40.546 05:33:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:38:40.546 05:33:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:40.546 05:33:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 782193 00:38:40.546 05:33:22 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:40.546 05:33:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:40.546 05:33:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 782193' 00:38:40.546 killing process with pid 782193 00:38:40.546 05:33:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 782193 00:38:40.546 05:33:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 782193 00:38:40.806 00:38:40.806 real 0m14.957s 00:38:40.806 user 0m59.129s 00:38:40.806 sys 0m2.985s 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:40.806 ************************************ 00:38:40.806 END TEST spdk_target_abort 00:38:40.806 ************************************ 00:38:40.806 05:33:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:40.806 05:33:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:40.806 05:33:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:40.806 05:33:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:40.806 ************************************ 00:38:40.806 START TEST kernel_target_abort 00:38:40.806 ************************************ 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:38:40.806 05:33:23 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:40.806 05:33:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:44.099 Waiting for block devices as requested 00:38:44.099 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:44.099 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:44.358 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:44.358 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:44.358 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:44.618 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:44.619 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:44.879 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:44.879 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:44.879 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:45.138 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:45.138 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:45.138 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:45.397 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:45.397 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:45.397 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:45.656 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:38:45.656 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:45.656 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:45.656 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:38:45.656 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local 
device=nvme0n1 00:38:45.656 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:45.656 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:45.656 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:38:45.656 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:45.656 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:45.656 No valid GPT data, bailing 00:38:45.656 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:45.656 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:45.656 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:45.656 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:38:45.656 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:38:45.656 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:45.656 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:38:45.915 00:38:45.915 Discovery Log Number of Records 2, Generation counter 2 00:38:45.915 =====Discovery Log Entry 0====== 00:38:45.915 trtype: tcp 00:38:45.915 adrfam: ipv4 00:38:45.915 subtype: current discovery subsystem 00:38:45.915 treq: not specified, sq flow control disable supported 00:38:45.915 portid: 1 00:38:45.915 trsvcid: 4420 00:38:45.915 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:45.915 traddr: 10.0.0.1 00:38:45.915 eflags: none 00:38:45.915 sectype: none 00:38:45.915 =====Discovery Log Entry 1====== 00:38:45.915 trtype: tcp 00:38:45.915 adrfam: ipv4 00:38:45.915 subtype: nvme subsystem 00:38:45.915 treq: not specified, sq flow control disable supported 00:38:45.915 portid: 1 00:38:45.915 trsvcid: 4420 00:38:45.915 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:45.915 traddr: 10.0.0.1 00:38:45.915 eflags: none 00:38:45.915 sectype: none 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:45.915 05:33:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:49.200 Initializing NVMe Controllers 00:38:49.200 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:49.200 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:49.200 Initialization complete. Launching workers. 
00:38:49.200 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 87628, failed: 0 00:38:49.200 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 87628, failed to submit 0 00:38:49.200 success 0, unsuccessful 87628, failed 0 00:38:49.200 05:33:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:49.200 05:33:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:52.489 Initializing NVMe Controllers 00:38:52.489 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:52.489 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:52.489 Initialization complete. Launching workers. 00:38:52.489 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 142670, failed: 0 00:38:52.489 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35578, failed to submit 107092 00:38:52.489 success 0, unsuccessful 35578, failed 0 00:38:52.489 05:33:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:52.489 05:33:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:55.774 Initializing NVMe Controllers 00:38:55.774 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:55.774 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:55.774 Initialization complete. Launching workers. 
00:38:55.774 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 135949, failed: 0 00:38:55.774 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34042, failed to submit 101907 00:38:55.774 success 0, unsuccessful 34042, failed 0 00:38:55.774 05:33:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:55.774 05:33:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:55.774 05:33:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:38:55.774 05:33:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:55.774 05:33:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:55.774 05:33:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:55.774 05:33:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:55.774 05:33:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:55.774 05:33:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:55.774 05:33:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:59.062 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:59.062 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:59.062 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:59.062 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:59.062 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:59.062 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:59.062 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:59.062 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:59.062 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:59.062 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:59.062 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:59.062 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:59.062 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:59.062 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:59.062 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:59.062 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:39:00.441 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:39:00.701 00:39:00.701 real 0m19.866s 00:39:00.701 user 0m9.248s 00:39:00.701 sys 0m6.117s 00:39:00.701 05:33:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:00.701 05:33:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:00.701 ************************************ 00:39:00.701 END TEST kernel_target_abort 00:39:00.701 ************************************ 00:39:00.701 05:33:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:00.701 05:33:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:39:00.701 05:33:43 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:00.701 05:33:43 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:39:00.701 05:33:43 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:00.701 05:33:43 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:39:00.701 05:33:43 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:00.701 05:33:43 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:00.701 rmmod nvme_tcp 00:39:00.701 rmmod nvme_fabrics 00:39:00.701 rmmod nvme_keyring 00:39:00.701 05:33:43 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:39:00.701 05:33:43 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:39:00.701 05:33:43 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:39:00.701 05:33:43 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 782193 ']' 00:39:00.701 05:33:43 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 782193 00:39:00.701 05:33:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 782193 ']' 00:39:00.701 05:33:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 782193 00:39:00.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (782193) - No such process 00:39:00.701 05:33:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 782193 is not found' 00:39:00.701 Process with pid 782193 is not found 00:39:00.701 05:33:43 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:00.701 05:33:43 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:03.994 Waiting for block devices as requested 00:39:03.994 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:04.254 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:04.254 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:04.254 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:04.513 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:04.513 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:04.513 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:04.513 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:04.773 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:04.773 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:04.773 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:05.032 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:05.032 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:05.032 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:05.292 
0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:05.292 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:05.292 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:39:05.552 05:33:47 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:05.552 05:33:47 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:05.552 05:33:47 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:39:05.552 05:33:47 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:39:05.552 05:33:47 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:05.552 05:33:47 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:39:05.552 05:33:47 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:05.552 05:33:47 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:05.552 05:33:47 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:05.552 05:33:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:05.552 05:33:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:08.089 05:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:08.089 00:39:08.089 real 0m55.571s 00:39:08.089 user 1m13.458s 00:39:08.089 sys 0m20.080s 00:39:08.089 05:33:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:08.089 05:33:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:08.089 ************************************ 00:39:08.089 END TEST nvmf_abort_qd_sizes 00:39:08.089 ************************************ 00:39:08.089 05:33:50 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:08.089 05:33:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:08.089 05:33:50 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:39:08.089 05:33:50 -- common/autotest_common.sh@10 -- # set +x 00:39:08.089 ************************************ 00:39:08.089 START TEST keyring_file 00:39:08.089 ************************************ 00:39:08.089 05:33:50 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:08.089 * Looking for test storage... 00:39:08.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:08.089 05:33:50 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:08.089 05:33:50 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:39:08.089 05:33:50 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:08.089 05:33:50 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@345 -- # : 1 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:08.089 05:33:50 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@353 -- # local d=1 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@355 -- # echo 1 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@353 -- # local d=2 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@355 -- # echo 2 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@368 -- # return 0 00:39:08.089 05:33:50 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:08.089 05:33:50 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:08.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.089 --rc genhtml_branch_coverage=1 00:39:08.089 --rc genhtml_function_coverage=1 00:39:08.089 --rc genhtml_legend=1 00:39:08.089 --rc geninfo_all_blocks=1 00:39:08.089 --rc geninfo_unexecuted_blocks=1 00:39:08.089 00:39:08.089 ' 00:39:08.089 05:33:50 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:08.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.089 --rc genhtml_branch_coverage=1 00:39:08.089 --rc genhtml_function_coverage=1 00:39:08.089 --rc genhtml_legend=1 00:39:08.089 --rc geninfo_all_blocks=1 00:39:08.089 --rc 
geninfo_unexecuted_blocks=1 00:39:08.089 00:39:08.089 ' 00:39:08.089 05:33:50 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:08.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.089 --rc genhtml_branch_coverage=1 00:39:08.089 --rc genhtml_function_coverage=1 00:39:08.089 --rc genhtml_legend=1 00:39:08.089 --rc geninfo_all_blocks=1 00:39:08.089 --rc geninfo_unexecuted_blocks=1 00:39:08.089 00:39:08.089 ' 00:39:08.089 05:33:50 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:08.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:08.089 --rc genhtml_branch_coverage=1 00:39:08.089 --rc genhtml_function_coverage=1 00:39:08.089 --rc genhtml_legend=1 00:39:08.089 --rc geninfo_all_blocks=1 00:39:08.089 --rc geninfo_unexecuted_blocks=1 00:39:08.089 00:39:08.089 ' 00:39:08.089 05:33:50 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:08.089 05:33:50 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:08.089 05:33:50 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:08.089 05:33:50 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:08.089 05:33:50 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.089 05:33:50 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.089 05:33:50 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.089 05:33:50 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:08.089 05:33:50 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@51 -- # : 0 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:39:08.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:08.089 05:33:50 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:08.089 05:33:50 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:08.089 05:33:50 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:08.089 05:33:50 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:08.089 05:33:50 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:08.089 05:33:50 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:08.089 05:33:50 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:08.089 05:33:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:08.089 05:33:50 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:08.089 05:33:50 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:08.089 05:33:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:08.089 05:33:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:08.089 05:33:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AYvPt3y9um 00:39:08.089 05:33:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:08.089 05:33:50 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:08.089 05:33:50 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AYvPt3y9um 00:39:08.089 05:33:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AYvPt3y9um 00:39:08.089 05:33:50 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.AYvPt3y9um 00:39:08.089 05:33:50 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:08.089 05:33:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:08.089 05:33:50 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:08.090 05:33:50 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:08.090 05:33:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:08.090 05:33:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:08.090 05:33:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ENEkjZTGG4 00:39:08.090 05:33:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:08.090 05:33:50 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:08.090 05:33:50 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:08.090 05:33:50 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:08.090 05:33:50 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:08.090 05:33:50 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:08.090 05:33:50 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:08.090 05:33:50 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ENEkjZTGG4 00:39:08.090 05:33:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ENEkjZTGG4 00:39:08.090 05:33:50 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ENEkjZTGG4 
00:39:08.090 05:33:50 keyring_file -- keyring/file.sh@30 -- # tgtpid=791674 00:39:08.090 05:33:50 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:08.090 05:33:50 keyring_file -- keyring/file.sh@32 -- # waitforlisten 791674 00:39:08.090 05:33:50 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 791674 ']' 00:39:08.090 05:33:50 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:08.090 05:33:50 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:08.090 05:33:50 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:08.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:08.090 05:33:50 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:08.090 05:33:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:08.090 [2024-12-09 05:33:50.495989] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:39:08.090 [2024-12-09 05:33:50.496046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid791674 ] 00:39:08.348 [2024-12-09 05:33:50.589922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:08.348 [2024-12-09 05:33:50.631751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:08.914 05:33:51 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:08.914 05:33:51 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:08.914 05:33:51 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:08.914 05:33:51 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.914 05:33:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:08.914 [2024-12-09 05:33:51.330276] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:08.914 null0 00:39:08.914 [2024-12-09 05:33:51.362333] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:08.914 [2024-12-09 05:33:51.362739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:09.172 05:33:51 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.172 05:33:51 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:09.172 05:33:51 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:09.172 05:33:51 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:09.172 05:33:51 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:39:09.172 05:33:51 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:39:09.172 05:33:51 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:39:09.172 05:33:51 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:09.172 05:33:51 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:09.172 05:33:51 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.172 05:33:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:09.172 [2024-12-09 05:33:51.394406] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:09.172 request: 00:39:09.172 { 00:39:09.172 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:09.173 "secure_channel": false, 00:39:09.173 "listen_address": { 00:39:09.173 "trtype": "tcp", 00:39:09.173 "traddr": "127.0.0.1", 00:39:09.173 "trsvcid": "4420" 00:39:09.173 }, 00:39:09.173 "method": "nvmf_subsystem_add_listener", 00:39:09.173 "req_id": 1 00:39:09.173 } 00:39:09.173 Got JSON-RPC error response 00:39:09.173 response: 00:39:09.173 { 00:39:09.173 "code": -32602, 00:39:09.173 "message": "Invalid parameters" 00:39:09.173 } 00:39:09.173 05:33:51 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:09.173 05:33:51 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:09.173 05:33:51 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:09.173 05:33:51 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:09.173 05:33:51 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:09.173 05:33:51 keyring_file -- keyring/file.sh@47 -- # bperfpid=791754 00:39:09.173 05:33:51 keyring_file -- keyring/file.sh@49 -- # waitforlisten 791754 /var/tmp/bperf.sock 00:39:09.173 05:33:51 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:09.173 05:33:51 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 791754 ']' 00:39:09.173 05:33:51 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:09.173 05:33:51 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:09.173 05:33:51 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:09.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:09.173 05:33:51 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:09.173 05:33:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:09.173 [2024-12-09 05:33:51.450766] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 00:39:09.173 [2024-12-09 05:33:51.450814] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid791754 ] 00:39:09.173 [2024-12-09 05:33:51.541026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:09.173 [2024-12-09 05:33:51.580588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:10.105 05:33:52 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:10.105 05:33:52 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:10.105 05:33:52 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AYvPt3y9um 00:39:10.105 05:33:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AYvPt3y9um 00:39:10.105 05:33:52 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ENEkjZTGG4 00:39:10.105 05:33:52 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ENEkjZTGG4 00:39:10.364 05:33:52 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:39:10.364 05:33:52 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:10.364 05:33:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:10.364 05:33:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:10.364 05:33:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:10.624 05:33:52 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.AYvPt3y9um == \/\t\m\p\/\t\m\p\.\A\Y\v\P\t\3\y\9\u\m ]] 00:39:10.624 05:33:52 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:39:10.624 05:33:52 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:39:10.624 05:33:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:10.624 05:33:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:10.624 05:33:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:10.624 05:33:53 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.ENEkjZTGG4 == \/\t\m\p\/\t\m\p\.\E\N\E\k\j\Z\T\G\G\4 ]] 00:39:10.625 05:33:53 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:39:10.625 05:33:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:10.625 05:33:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:10.625 05:33:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:10.625 05:33:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:10.625 05:33:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:39:10.885 05:33:53 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:10.885 05:33:53 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:39:10.885 05:33:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:10.885 05:33:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:10.885 05:33:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:10.885 05:33:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:10.885 05:33:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:11.144 05:33:53 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:39:11.144 05:33:53 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:11.144 05:33:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:11.403 [2024-12-09 05:33:53.637135] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:11.403 nvme0n1 00:39:11.403 05:33:53 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:39:11.403 05:33:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:11.403 05:33:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:11.403 05:33:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:11.403 05:33:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:11.403 05:33:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:39:11.662 05:33:53 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:39:11.662 05:33:53 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:39:11.662 05:33:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:11.662 05:33:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:11.662 05:33:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:11.662 05:33:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:11.662 05:33:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:11.921 05:33:54 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:39:11.921 05:33:54 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:11.921 Running I/O for 1 seconds... 00:39:12.860 18466.00 IOPS, 72.13 MiB/s 00:39:12.860 Latency(us) 00:39:12.860 [2024-12-09T04:33:55.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:12.860 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:12.860 nvme0n1 : 1.00 18509.01 72.30 0.00 0.00 6902.04 4456.45 12268.34 00:39:12.860 [2024-12-09T04:33:55.330Z] =================================================================================================================== 00:39:12.860 [2024-12-09T04:33:55.330Z] Total : 18509.01 72.30 0.00 0.00 6902.04 4456.45 12268.34 00:39:12.860 { 00:39:12.860 "results": [ 00:39:12.860 { 00:39:12.860 "job": "nvme0n1", 00:39:12.860 "core_mask": "0x2", 00:39:12.860 "workload": "randrw", 00:39:12.860 "percentage": 50, 00:39:12.860 "status": "finished", 00:39:12.860 "queue_depth": 128, 00:39:12.860 "io_size": 4096, 00:39:12.860 "runtime": 1.004592, 00:39:12.860 "iops": 18509.006641502223, 00:39:12.860 "mibps": 72.30080719336806, 00:39:12.860 
"io_failed": 0, 00:39:12.860 "io_timeout": 0, 00:39:12.860 "avg_latency_us": 6902.040468280091, 00:39:12.860 "min_latency_us": 4456.448, 00:39:12.860 "max_latency_us": 12268.3392 00:39:12.860 } 00:39:12.860 ], 00:39:12.860 "core_count": 1 00:39:12.860 } 00:39:12.860 05:33:55 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:12.860 05:33:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:13.117 05:33:55 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:39:13.117 05:33:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:13.117 05:33:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:13.118 05:33:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:13.118 05:33:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:13.118 05:33:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:13.376 05:33:55 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:13.376 05:33:55 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:39:13.376 05:33:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:13.376 05:33:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:13.376 05:33:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:13.376 05:33:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:13.376 05:33:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:13.635 05:33:55 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:39:13.635 05:33:55 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 
127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:13.635 05:33:55 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:13.635 05:33:55 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:13.635 05:33:55 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:13.635 05:33:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:13.635 05:33:55 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:13.635 05:33:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:13.635 05:33:55 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:13.635 05:33:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:13.635 [2024-12-09 05:33:56.026169] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:13.635 [2024-12-09 05:33:56.026804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf7e0 (107): Transport endpoint is not connected 00:39:13.635 [2024-12-09 05:33:56.027799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf7e0 (9): Bad file descriptor 00:39:13.635 [2024-12-09 05:33:56.028800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:13.635 [2024-12-09 05:33:56.028819] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:13.635 [2024-12-09 05:33:56.028828] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:13.635 [2024-12-09 05:33:56.028838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:39:13.635 request: 00:39:13.635 { 00:39:13.635 "name": "nvme0", 00:39:13.635 "trtype": "tcp", 00:39:13.635 "traddr": "127.0.0.1", 00:39:13.635 "adrfam": "ipv4", 00:39:13.635 "trsvcid": "4420", 00:39:13.635 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:13.635 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:13.635 "prchk_reftag": false, 00:39:13.635 "prchk_guard": false, 00:39:13.635 "hdgst": false, 00:39:13.635 "ddgst": false, 00:39:13.635 "psk": "key1", 00:39:13.635 "allow_unrecognized_csi": false, 00:39:13.635 "method": "bdev_nvme_attach_controller", 00:39:13.635 "req_id": 1 00:39:13.635 } 00:39:13.635 Got JSON-RPC error response 00:39:13.635 response: 00:39:13.635 { 00:39:13.635 "code": -5, 00:39:13.635 "message": "Input/output error" 00:39:13.635 } 00:39:13.635 05:33:56 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:13.635 05:33:56 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:13.635 05:33:56 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:13.635 05:33:56 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:13.635 05:33:56 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:39:13.635 05:33:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:13.635 05:33:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:13.635 05:33:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:13.635 05:33:56 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:13.635 05:33:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:13.895 05:33:56 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:13.895 05:33:56 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:39:13.895 05:33:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:13.895 05:33:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:13.895 05:33:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:13.895 05:33:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:13.895 05:33:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:14.154 05:33:56 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:39:14.154 05:33:56 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:39:14.154 05:33:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:14.414 05:33:56 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:39:14.414 05:33:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:14.414 05:33:56 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:39:14.414 05:33:56 keyring_file -- keyring/file.sh@78 -- # jq length 00:39:14.414 05:33:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:14.679 05:33:57 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:39:14.679 05:33:57 keyring_file -- 
keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.AYvPt3y9um 00:39:14.679 05:33:57 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.AYvPt3y9um 00:39:14.679 05:33:57 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:14.679 05:33:57 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.AYvPt3y9um 00:39:14.679 05:33:57 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:14.679 05:33:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:14.679 05:33:57 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:14.679 05:33:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:14.679 05:33:57 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AYvPt3y9um 00:39:14.679 05:33:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AYvPt3y9um 00:39:14.939 [2024-12-09 05:33:57.245001] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.AYvPt3y9um': 0100660 00:39:14.939 [2024-12-09 05:33:57.245029] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:14.939 request: 00:39:14.939 { 00:39:14.939 "name": "key0", 00:39:14.939 "path": "/tmp/tmp.AYvPt3y9um", 00:39:14.939 "method": "keyring_file_add_key", 00:39:14.939 "req_id": 1 00:39:14.939 } 00:39:14.939 Got JSON-RPC error response 00:39:14.939 response: 00:39:14.939 { 00:39:14.939 "code": -1, 00:39:14.939 "message": "Operation not permitted" 00:39:14.939 } 00:39:14.939 05:33:57 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:14.939 05:33:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:14.939 05:33:57 keyring_file -- common/autotest_common.sh@674 
-- # [[ -n '' ]] 00:39:14.939 05:33:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:14.939 05:33:57 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.AYvPt3y9um 00:39:14.939 05:33:57 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AYvPt3y9um 00:39:14.939 05:33:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AYvPt3y9um 00:39:15.199 05:33:57 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.AYvPt3y9um 00:39:15.199 05:33:57 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:39:15.199 05:33:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:15.199 05:33:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:15.199 05:33:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:15.199 05:33:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:15.199 05:33:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:15.459 05:33:57 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:39:15.459 05:33:57 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:15.459 05:33:57 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:15.459 05:33:57 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:15.459 05:33:57 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:15.459 05:33:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:39:15.459 05:33:57 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:15.459 05:33:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:15.459 05:33:57 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:15.459 05:33:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:15.459 [2024-12-09 05:33:57.846589] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.AYvPt3y9um': No such file or directory 00:39:15.459 [2024-12-09 05:33:57.846615] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:15.459 [2024-12-09 05:33:57.846633] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:15.459 [2024-12-09 05:33:57.846642] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:39:15.459 [2024-12-09 05:33:57.846652] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:15.459 [2024-12-09 05:33:57.846660] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:15.459 request: 00:39:15.459 { 00:39:15.459 "name": "nvme0", 00:39:15.459 "trtype": "tcp", 00:39:15.459 "traddr": "127.0.0.1", 00:39:15.459 "adrfam": "ipv4", 00:39:15.459 "trsvcid": "4420", 00:39:15.459 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:15.459 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:15.459 "prchk_reftag": 
false, 00:39:15.459 "prchk_guard": false, 00:39:15.459 "hdgst": false, 00:39:15.459 "ddgst": false, 00:39:15.459 "psk": "key0", 00:39:15.459 "allow_unrecognized_csi": false, 00:39:15.459 "method": "bdev_nvme_attach_controller", 00:39:15.459 "req_id": 1 00:39:15.459 } 00:39:15.459 Got JSON-RPC error response 00:39:15.459 response: 00:39:15.459 { 00:39:15.459 "code": -19, 00:39:15.459 "message": "No such device" 00:39:15.459 } 00:39:15.459 05:33:57 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:15.459 05:33:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:15.459 05:33:57 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:15.459 05:33:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:15.459 05:33:57 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:39:15.459 05:33:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:15.719 05:33:58 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:15.719 05:33:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:15.719 05:33:58 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:15.719 05:33:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:15.719 05:33:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:15.719 05:33:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:15.719 05:33:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LWDOdgb6Yy 00:39:15.719 05:33:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:15.719 05:33:58 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:15.719 05:33:58 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 
00:39:15.719 05:33:58 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:15.719 05:33:58 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:15.719 05:33:58 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:15.719 05:33:58 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:15.719 05:33:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LWDOdgb6Yy 00:39:15.719 05:33:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LWDOdgb6Yy 00:39:15.719 05:33:58 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.LWDOdgb6Yy 00:39:15.719 05:33:58 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LWDOdgb6Yy 00:39:15.719 05:33:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LWDOdgb6Yy 00:39:15.979 05:33:58 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:15.979 05:33:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:16.240 nvme0n1 00:39:16.240 05:33:58 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:39:16.240 05:33:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:16.240 05:33:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:16.240 05:33:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:16.240 05:33:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:16.240 05:33:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:39:16.499 05:33:58 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:39:16.499 05:33:58 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:39:16.500 05:33:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:16.500 05:33:58 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:39:16.500 05:33:58 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:39:16.500 05:33:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:16.500 05:33:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:16.500 05:33:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:16.759 05:33:59 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:39:16.759 05:33:59 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:39:16.759 05:33:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:16.759 05:33:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:16.759 05:33:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:16.759 05:33:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:16.759 05:33:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:17.019 05:33:59 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:39:17.019 05:33:59 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:17.019 05:33:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:17.278 05:33:59 keyring_file -- 
keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:39:17.278 05:33:59 keyring_file -- keyring/file.sh@105 -- # jq length 00:39:17.278 05:33:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:17.278 05:33:59 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:39:17.278 05:33:59 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LWDOdgb6Yy 00:39:17.278 05:33:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LWDOdgb6Yy 00:39:17.537 05:33:59 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ENEkjZTGG4 00:39:17.537 05:33:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ENEkjZTGG4 00:39:17.797 05:34:00 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:17.797 05:34:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:18.056 nvme0n1 00:39:18.056 05:34:00 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:39:18.056 05:34:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:18.317 05:34:00 keyring_file -- keyring/file.sh@113 -- # config='{ 00:39:18.317 "subsystems": [ 00:39:18.317 { 00:39:18.317 "subsystem": "keyring", 00:39:18.317 "config": [ 00:39:18.317 { 00:39:18.317 "method": 
"keyring_file_add_key", 00:39:18.317 "params": { 00:39:18.317 "name": "key0", 00:39:18.317 "path": "/tmp/tmp.LWDOdgb6Yy" 00:39:18.317 } 00:39:18.317 }, 00:39:18.317 { 00:39:18.317 "method": "keyring_file_add_key", 00:39:18.317 "params": { 00:39:18.317 "name": "key1", 00:39:18.317 "path": "/tmp/tmp.ENEkjZTGG4" 00:39:18.317 } 00:39:18.317 } 00:39:18.317 ] 00:39:18.317 }, 00:39:18.317 { 00:39:18.317 "subsystem": "iobuf", 00:39:18.317 "config": [ 00:39:18.317 { 00:39:18.317 "method": "iobuf_set_options", 00:39:18.317 "params": { 00:39:18.317 "small_pool_count": 8192, 00:39:18.317 "large_pool_count": 1024, 00:39:18.317 "small_bufsize": 8192, 00:39:18.317 "large_bufsize": 135168, 00:39:18.317 "enable_numa": false 00:39:18.317 } 00:39:18.317 } 00:39:18.317 ] 00:39:18.317 }, 00:39:18.317 { 00:39:18.317 "subsystem": "sock", 00:39:18.317 "config": [ 00:39:18.317 { 00:39:18.317 "method": "sock_set_default_impl", 00:39:18.317 "params": { 00:39:18.317 "impl_name": "posix" 00:39:18.317 } 00:39:18.317 }, 00:39:18.317 { 00:39:18.317 "method": "sock_impl_set_options", 00:39:18.317 "params": { 00:39:18.317 "impl_name": "ssl", 00:39:18.317 "recv_buf_size": 4096, 00:39:18.317 "send_buf_size": 4096, 00:39:18.317 "enable_recv_pipe": true, 00:39:18.317 "enable_quickack": false, 00:39:18.317 "enable_placement_id": 0, 00:39:18.317 "enable_zerocopy_send_server": true, 00:39:18.317 "enable_zerocopy_send_client": false, 00:39:18.317 "zerocopy_threshold": 0, 00:39:18.317 "tls_version": 0, 00:39:18.317 "enable_ktls": false 00:39:18.317 } 00:39:18.317 }, 00:39:18.317 { 00:39:18.317 "method": "sock_impl_set_options", 00:39:18.317 "params": { 00:39:18.317 "impl_name": "posix", 00:39:18.317 "recv_buf_size": 2097152, 00:39:18.317 "send_buf_size": 2097152, 00:39:18.317 "enable_recv_pipe": true, 00:39:18.317 "enable_quickack": false, 00:39:18.317 "enable_placement_id": 0, 00:39:18.317 "enable_zerocopy_send_server": true, 00:39:18.317 "enable_zerocopy_send_client": false, 00:39:18.317 
"zerocopy_threshold": 0, 00:39:18.317 "tls_version": 0, 00:39:18.317 "enable_ktls": false 00:39:18.317 } 00:39:18.317 } 00:39:18.317 ] 00:39:18.317 }, 00:39:18.317 { 00:39:18.317 "subsystem": "vmd", 00:39:18.317 "config": [] 00:39:18.317 }, 00:39:18.317 { 00:39:18.317 "subsystem": "accel", 00:39:18.317 "config": [ 00:39:18.317 { 00:39:18.317 "method": "accel_set_options", 00:39:18.317 "params": { 00:39:18.317 "small_cache_size": 128, 00:39:18.317 "large_cache_size": 16, 00:39:18.317 "task_count": 2048, 00:39:18.317 "sequence_count": 2048, 00:39:18.317 "buf_count": 2048 00:39:18.317 } 00:39:18.317 } 00:39:18.317 ] 00:39:18.317 }, 00:39:18.317 { 00:39:18.317 "subsystem": "bdev", 00:39:18.317 "config": [ 00:39:18.317 { 00:39:18.317 "method": "bdev_set_options", 00:39:18.317 "params": { 00:39:18.317 "bdev_io_pool_size": 65535, 00:39:18.317 "bdev_io_cache_size": 256, 00:39:18.317 "bdev_auto_examine": true, 00:39:18.317 "iobuf_small_cache_size": 128, 00:39:18.317 "iobuf_large_cache_size": 16 00:39:18.317 } 00:39:18.317 }, 00:39:18.317 { 00:39:18.317 "method": "bdev_raid_set_options", 00:39:18.317 "params": { 00:39:18.317 "process_window_size_kb": 1024, 00:39:18.317 "process_max_bandwidth_mb_sec": 0 00:39:18.317 } 00:39:18.317 }, 00:39:18.317 { 00:39:18.317 "method": "bdev_iscsi_set_options", 00:39:18.317 "params": { 00:39:18.317 "timeout_sec": 30 00:39:18.317 } 00:39:18.317 }, 00:39:18.317 { 00:39:18.317 "method": "bdev_nvme_set_options", 00:39:18.317 "params": { 00:39:18.317 "action_on_timeout": "none", 00:39:18.317 "timeout_us": 0, 00:39:18.317 "timeout_admin_us": 0, 00:39:18.317 "keep_alive_timeout_ms": 10000, 00:39:18.317 "arbitration_burst": 0, 00:39:18.317 "low_priority_weight": 0, 00:39:18.317 "medium_priority_weight": 0, 00:39:18.317 "high_priority_weight": 0, 00:39:18.317 "nvme_adminq_poll_period_us": 10000, 00:39:18.317 "nvme_ioq_poll_period_us": 0, 00:39:18.317 "io_queue_requests": 512, 00:39:18.317 "delay_cmd_submit": true, 00:39:18.317 
"transport_retry_count": 4, 00:39:18.317 "bdev_retry_count": 3, 00:39:18.318 "transport_ack_timeout": 0, 00:39:18.318 "ctrlr_loss_timeout_sec": 0, 00:39:18.318 "reconnect_delay_sec": 0, 00:39:18.318 "fast_io_fail_timeout_sec": 0, 00:39:18.318 "disable_auto_failback": false, 00:39:18.318 "generate_uuids": false, 00:39:18.318 "transport_tos": 0, 00:39:18.318 "nvme_error_stat": false, 00:39:18.318 "rdma_srq_size": 0, 00:39:18.318 "io_path_stat": false, 00:39:18.318 "allow_accel_sequence": false, 00:39:18.318 "rdma_max_cq_size": 0, 00:39:18.318 "rdma_cm_event_timeout_ms": 0, 00:39:18.318 "dhchap_digests": [ 00:39:18.318 "sha256", 00:39:18.318 "sha384", 00:39:18.318 "sha512" 00:39:18.318 ], 00:39:18.318 "dhchap_dhgroups": [ 00:39:18.318 "null", 00:39:18.318 "ffdhe2048", 00:39:18.318 "ffdhe3072", 00:39:18.318 "ffdhe4096", 00:39:18.318 "ffdhe6144", 00:39:18.318 "ffdhe8192" 00:39:18.318 ] 00:39:18.318 } 00:39:18.318 }, 00:39:18.318 { 00:39:18.318 "method": "bdev_nvme_attach_controller", 00:39:18.318 "params": { 00:39:18.318 "name": "nvme0", 00:39:18.318 "trtype": "TCP", 00:39:18.318 "adrfam": "IPv4", 00:39:18.318 "traddr": "127.0.0.1", 00:39:18.318 "trsvcid": "4420", 00:39:18.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:18.318 "prchk_reftag": false, 00:39:18.318 "prchk_guard": false, 00:39:18.318 "ctrlr_loss_timeout_sec": 0, 00:39:18.318 "reconnect_delay_sec": 0, 00:39:18.318 "fast_io_fail_timeout_sec": 0, 00:39:18.318 "psk": "key0", 00:39:18.318 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:18.318 "hdgst": false, 00:39:18.318 "ddgst": false, 00:39:18.318 "multipath": "multipath" 00:39:18.318 } 00:39:18.318 }, 00:39:18.318 { 00:39:18.318 "method": "bdev_nvme_set_hotplug", 00:39:18.318 "params": { 00:39:18.318 "period_us": 100000, 00:39:18.318 "enable": false 00:39:18.318 } 00:39:18.318 }, 00:39:18.318 { 00:39:18.318 "method": "bdev_wait_for_examine" 00:39:18.318 } 00:39:18.318 ] 00:39:18.318 }, 00:39:18.318 { 00:39:18.318 "subsystem": "nbd", 00:39:18.318 "config": [] 
00:39:18.318 } 00:39:18.318 ] 00:39:18.318 }' 00:39:18.318 05:34:00 keyring_file -- keyring/file.sh@115 -- # killprocess 791754 00:39:18.318 05:34:00 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 791754 ']' 00:39:18.318 05:34:00 keyring_file -- common/autotest_common.sh@958 -- # kill -0 791754 00:39:18.318 05:34:00 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:18.318 05:34:00 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:18.318 05:34:00 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 791754 00:39:18.318 05:34:00 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:18.318 05:34:00 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:18.318 05:34:00 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 791754' 00:39:18.318 killing process with pid 791754 00:39:18.318 05:34:00 keyring_file -- common/autotest_common.sh@973 -- # kill 791754 00:39:18.318 Received shutdown signal, test time was about 1.000000 seconds 00:39:18.318 00:39:18.318 Latency(us) 00:39:18.318 [2024-12-09T04:34:00.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:18.318 [2024-12-09T04:34:00.788Z] =================================================================================================================== 00:39:18.318 [2024-12-09T04:34:00.788Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:18.318 05:34:00 keyring_file -- common/autotest_common.sh@978 -- # wait 791754 00:39:18.578 05:34:00 keyring_file -- keyring/file.sh@118 -- # bperfpid=793422 00:39:18.578 05:34:00 keyring_file -- keyring/file.sh@120 -- # waitforlisten 793422 /var/tmp/bperf.sock 00:39:18.578 05:34:00 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 793422 ']' 00:39:18.578 05:34:00 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:18.578 05:34:00 keyring_file -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:39:18.578 05:34:00 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:18.578 05:34:00 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:18.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:18.578 05:34:00 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:18.578 05:34:00 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:39:18.578 "subsystems": [ 00:39:18.578 { 00:39:18.578 "subsystem": "keyring", 00:39:18.578 "config": [ 00:39:18.578 { 00:39:18.578 "method": "keyring_file_add_key", 00:39:18.578 "params": { 00:39:18.578 "name": "key0", 00:39:18.578 "path": "/tmp/tmp.LWDOdgb6Yy" 00:39:18.578 } 00:39:18.578 }, 00:39:18.578 { 00:39:18.578 "method": "keyring_file_add_key", 00:39:18.578 "params": { 00:39:18.578 "name": "key1", 00:39:18.578 "path": "/tmp/tmp.ENEkjZTGG4" 00:39:18.578 } 00:39:18.578 } 00:39:18.578 ] 00:39:18.578 }, 00:39:18.578 { 00:39:18.578 "subsystem": "iobuf", 00:39:18.578 "config": [ 00:39:18.578 { 00:39:18.578 "method": "iobuf_set_options", 00:39:18.578 "params": { 00:39:18.578 "small_pool_count": 8192, 00:39:18.578 "large_pool_count": 1024, 00:39:18.578 "small_bufsize": 8192, 00:39:18.578 "large_bufsize": 135168, 00:39:18.578 "enable_numa": false 00:39:18.578 } 00:39:18.578 } 00:39:18.578 ] 00:39:18.578 }, 00:39:18.578 { 00:39:18.578 "subsystem": "sock", 00:39:18.578 "config": [ 00:39:18.578 { 00:39:18.578 "method": "sock_set_default_impl", 00:39:18.578 "params": { 00:39:18.578 "impl_name": "posix" 00:39:18.578 } 00:39:18.578 }, 00:39:18.578 { 00:39:18.578 "method": "sock_impl_set_options", 00:39:18.578 "params": { 00:39:18.578 "impl_name": "ssl", 00:39:18.578 
"recv_buf_size": 4096, 00:39:18.578 "send_buf_size": 4096, 00:39:18.578 "enable_recv_pipe": true, 00:39:18.578 "enable_quickack": false, 00:39:18.578 "enable_placement_id": 0, 00:39:18.578 "enable_zerocopy_send_server": true, 00:39:18.578 "enable_zerocopy_send_client": false, 00:39:18.578 "zerocopy_threshold": 0, 00:39:18.578 "tls_version": 0, 00:39:18.578 "enable_ktls": false 00:39:18.578 } 00:39:18.578 }, 00:39:18.578 { 00:39:18.578 "method": "sock_impl_set_options", 00:39:18.578 "params": { 00:39:18.578 "impl_name": "posix", 00:39:18.578 "recv_buf_size": 2097152, 00:39:18.578 "send_buf_size": 2097152, 00:39:18.578 "enable_recv_pipe": true, 00:39:18.578 "enable_quickack": false, 00:39:18.578 "enable_placement_id": 0, 00:39:18.578 "enable_zerocopy_send_server": true, 00:39:18.578 "enable_zerocopy_send_client": false, 00:39:18.578 "zerocopy_threshold": 0, 00:39:18.578 "tls_version": 0, 00:39:18.578 "enable_ktls": false 00:39:18.578 } 00:39:18.578 } 00:39:18.578 ] 00:39:18.578 }, 00:39:18.578 { 00:39:18.578 "subsystem": "vmd", 00:39:18.578 "config": [] 00:39:18.578 }, 00:39:18.578 { 00:39:18.578 "subsystem": "accel", 00:39:18.578 "config": [ 00:39:18.578 { 00:39:18.578 "method": "accel_set_options", 00:39:18.578 "params": { 00:39:18.578 "small_cache_size": 128, 00:39:18.578 "large_cache_size": 16, 00:39:18.578 "task_count": 2048, 00:39:18.578 "sequence_count": 2048, 00:39:18.578 "buf_count": 2048 00:39:18.578 } 00:39:18.578 } 00:39:18.578 ] 00:39:18.578 }, 00:39:18.578 { 00:39:18.578 "subsystem": "bdev", 00:39:18.578 "config": [ 00:39:18.578 { 00:39:18.578 "method": "bdev_set_options", 00:39:18.578 "params": { 00:39:18.578 "bdev_io_pool_size": 65535, 00:39:18.578 "bdev_io_cache_size": 256, 00:39:18.578 "bdev_auto_examine": true, 00:39:18.578 "iobuf_small_cache_size": 128, 00:39:18.578 "iobuf_large_cache_size": 16 00:39:18.578 } 00:39:18.578 }, 00:39:18.578 { 00:39:18.578 "method": "bdev_raid_set_options", 00:39:18.578 "params": { 00:39:18.578 
"process_window_size_kb": 1024, 00:39:18.578 "process_max_bandwidth_mb_sec": 0 00:39:18.578 } 00:39:18.578 }, 00:39:18.578 { 00:39:18.578 "method": "bdev_iscsi_set_options", 00:39:18.578 "params": { 00:39:18.578 "timeout_sec": 30 00:39:18.578 } 00:39:18.578 }, 00:39:18.578 { 00:39:18.578 "method": "bdev_nvme_set_options", 00:39:18.579 "params": { 00:39:18.579 "action_on_timeout": "none", 00:39:18.579 "timeout_us": 0, 00:39:18.579 "timeout_admin_us": 0, 00:39:18.579 "keep_alive_timeout_ms": 10000, 00:39:18.579 "arbitration_burst": 0, 00:39:18.579 "low_priority_weight": 0, 00:39:18.579 "medium_priority_weight": 0, 00:39:18.579 "high_priority_weight": 0, 00:39:18.579 "nvme_adminq_poll_period_us": 10000, 00:39:18.579 "nvme_ioq_poll_period_us": 0, 00:39:18.579 "io_queue_requests": 512, 00:39:18.579 "delay_cmd_submit": true, 00:39:18.579 "transport_retry_count": 4, 00:39:18.579 "bdev_retry_count": 3, 00:39:18.579 "transport_ack_timeout": 0, 00:39:18.579 "ctrlr_loss_timeout_sec": 0, 00:39:18.579 "reconnect_delay_sec": 0, 00:39:18.579 "fast_io_fail_timeout_sec": 0, 00:39:18.579 "disable_auto_failback": false, 00:39:18.579 "generate_uuids": false, 00:39:18.579 "transport_tos": 0, 00:39:18.579 "nvme_error_stat": false, 00:39:18.579 "rdma_srq_size": 0, 00:39:18.579 "io_path_stat": false, 00:39:18.579 "allow_accel_sequence": false, 00:39:18.579 "rdma_max_cq_size": 0, 00:39:18.579 "rdma_cm_event_timeout_ms": 0, 00:39:18.579 "dhchap_digests": [ 00:39:18.579 "sha256", 00:39:18.579 "sha384", 00:39:18.579 "sha512" 00:39:18.579 ], 00:39:18.579 "dhchap_dhgroups": [ 00:39:18.579 "null", 00:39:18.579 "ffdhe2048", 00:39:18.579 "ffdhe3072", 00:39:18.579 "ffdhe4096", 00:39:18.579 "ffdhe6144", 00:39:18.579 "ffdhe8192" 00:39:18.579 ] 00:39:18.579 } 00:39:18.579 }, 00:39:18.579 { 00:39:18.579 "method": "bdev_nvme_attach_controller", 00:39:18.579 "params": { 00:39:18.579 "name": "nvme0", 00:39:18.579 "trtype": "TCP", 00:39:18.579 "adrfam": "IPv4", 00:39:18.579 "traddr": "127.0.0.1", 
00:39:18.579 "trsvcid": "4420", 00:39:18.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:18.579 "prchk_reftag": false, 00:39:18.579 "prchk_guard": false, 00:39:18.579 "ctrlr_loss_timeout_sec": 0, 00:39:18.579 "reconnect_delay_sec": 0, 00:39:18.579 "fast_io_fail_timeout_sec": 0, 00:39:18.579 "psk": "key0", 00:39:18.579 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:18.579 "hdgst": false, 00:39:18.579 "ddgst": false, 00:39:18.579 "multipath": "multipath" 00:39:18.579 } 00:39:18.579 }, 00:39:18.579 { 00:39:18.579 "method": "bdev_nvme_set_hotplug", 00:39:18.579 "params": { 00:39:18.579 "period_us": 100000, 00:39:18.579 "enable": false 00:39:18.579 } 00:39:18.579 }, 00:39:18.579 { 00:39:18.579 "method": "bdev_wait_for_examine" 00:39:18.579 } 00:39:18.579 ] 00:39:18.579 }, 00:39:18.579 { 00:39:18.579 "subsystem": "nbd", 00:39:18.579 "config": [] 00:39:18.579 } 00:39:18.579 ] 00:39:18.579 }' 00:39:18.579 05:34:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:18.579 [2024-12-09 05:34:00.937831] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:39:18.579 [2024-12-09 05:34:00.937898] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid793422 ] 00:39:18.579 [2024-12-09 05:34:01.031519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:18.838 [2024-12-09 05:34:01.073437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:18.838 [2024-12-09 05:34:01.235680] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:19.405 05:34:01 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:19.405 05:34:01 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:19.405 05:34:01 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:39:19.405 05:34:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:19.405 05:34:01 keyring_file -- keyring/file.sh@121 -- # jq length 00:39:19.664 05:34:01 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:19.664 05:34:01 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:39:19.664 05:34:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:19.664 05:34:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:19.664 05:34:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:19.664 05:34:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:19.664 05:34:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:19.923 05:34:02 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:39:19.924 05:34:02 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:39:19.924 05:34:02 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:19.924 05:34:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:19.924 05:34:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:19.924 05:34:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:19.924 05:34:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:19.924 05:34:02 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:39:19.924 05:34:02 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:39:19.924 05:34:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:19.924 05:34:02 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:39:20.183 05:34:02 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:39:20.183 05:34:02 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:20.183 05:34:02 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.LWDOdgb6Yy /tmp/tmp.ENEkjZTGG4 00:39:20.183 05:34:02 keyring_file -- keyring/file.sh@20 -- # killprocess 793422 00:39:20.183 05:34:02 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 793422 ']' 00:39:20.183 05:34:02 keyring_file -- common/autotest_common.sh@958 -- # kill -0 793422 00:39:20.183 05:34:02 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:20.183 05:34:02 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:20.183 05:34:02 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 793422 00:39:20.183 05:34:02 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:20.183 05:34:02 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:20.183 05:34:02 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 793422' 00:39:20.183 killing process with pid 793422 00:39:20.183 05:34:02 keyring_file -- common/autotest_common.sh@973 -- # kill 793422 00:39:20.183 Received shutdown signal, test time was about 1.000000 seconds 00:39:20.183 00:39:20.183 Latency(us) 00:39:20.183 [2024-12-09T04:34:02.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:20.183 [2024-12-09T04:34:02.653Z] =================================================================================================================== 00:39:20.183 [2024-12-09T04:34:02.653Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:20.183 05:34:02 keyring_file -- common/autotest_common.sh@978 -- # wait 793422 00:39:20.442 05:34:02 keyring_file -- keyring/file.sh@21 -- # killprocess 791674 00:39:20.442 05:34:02 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 791674 ']' 00:39:20.443 05:34:02 keyring_file -- common/autotest_common.sh@958 -- # kill -0 791674 00:39:20.443 05:34:02 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:20.443 05:34:02 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:20.443 05:34:02 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 791674 00:39:20.443 05:34:02 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:20.443 05:34:02 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:20.443 05:34:02 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 791674' 00:39:20.443 killing process with pid 791674 00:39:20.443 05:34:02 keyring_file -- common/autotest_common.sh@973 -- # kill 791674 00:39:20.443 05:34:02 keyring_file -- common/autotest_common.sh@978 -- # wait 791674 00:39:21.011 00:39:21.011 real 0m13.117s 00:39:21.011 user 0m31.073s 00:39:21.011 sys 0m3.401s 00:39:21.011 05:34:03 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:21.011 05:34:03 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:21.011 ************************************ 00:39:21.011 END TEST keyring_file 00:39:21.011 ************************************ 00:39:21.011 05:34:03 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:39:21.011 05:34:03 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:21.011 05:34:03 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:21.011 05:34:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:21.011 05:34:03 -- common/autotest_common.sh@10 -- # set +x 00:39:21.011 ************************************ 00:39:21.011 START TEST keyring_linux 00:39:21.011 ************************************ 00:39:21.011 05:34:03 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:21.011 Joined session keyring: 584377161 00:39:21.011 * Looking for test storage... 
00:39:21.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:21.011 05:34:03 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:21.011 05:34:03 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:39:21.011 05:34:03 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:21.271 05:34:03 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@345 -- # : 1 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@368 -- # return 0 00:39:21.271 05:34:03 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:21.271 05:34:03 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:21.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.271 --rc genhtml_branch_coverage=1 00:39:21.271 --rc genhtml_function_coverage=1 00:39:21.271 --rc genhtml_legend=1 00:39:21.271 --rc geninfo_all_blocks=1 00:39:21.271 --rc geninfo_unexecuted_blocks=1 00:39:21.271 00:39:21.271 ' 00:39:21.271 05:34:03 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:21.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.271 --rc genhtml_branch_coverage=1 00:39:21.271 --rc genhtml_function_coverage=1 00:39:21.271 --rc genhtml_legend=1 00:39:21.271 --rc geninfo_all_blocks=1 00:39:21.271 --rc geninfo_unexecuted_blocks=1 00:39:21.271 00:39:21.271 ' 
00:39:21.271 05:34:03 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:21.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.271 --rc genhtml_branch_coverage=1 00:39:21.271 --rc genhtml_function_coverage=1 00:39:21.271 --rc genhtml_legend=1 00:39:21.271 --rc geninfo_all_blocks=1 00:39:21.271 --rc geninfo_unexecuted_blocks=1 00:39:21.271 00:39:21.271 ' 00:39:21.271 05:34:03 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:21.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.271 --rc genhtml_branch_coverage=1 00:39:21.271 --rc genhtml_function_coverage=1 00:39:21.271 --rc genhtml_legend=1 00:39:21.271 --rc geninfo_all_blocks=1 00:39:21.271 --rc geninfo_unexecuted_blocks=1 00:39:21.271 00:39:21.271 ' 00:39:21.271 05:34:03 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:21.271 05:34:03 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:21.271 05:34:03 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:21.271 05:34:03 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.271 05:34:03 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.271 05:34:03 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.271 05:34:03 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:21.271 05:34:03 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:39:21.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:21.271 05:34:03 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:21.271 05:34:03 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:21.271 05:34:03 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:21.271 05:34:03 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:21.271 05:34:03 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:21.271 05:34:03 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:21.271 05:34:03 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:21.271 05:34:03 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:21.271 05:34:03 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:21.271 05:34:03 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:21.271 05:34:03 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:21.271 05:34:03 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:21.271 05:34:03 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:21.271 05:34:03 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:21.271 05:34:03 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:21.271 05:34:03 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:21.271 /tmp/:spdk-test:key0 00:39:21.272 05:34:03 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:21.272 05:34:03 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:21.272 05:34:03 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:21.272 05:34:03 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:21.272 05:34:03 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:21.272 05:34:03 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:21.272 05:34:03 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:21.272 05:34:03 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:21.272 05:34:03 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:21.272 05:34:03 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:21.272 05:34:03 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:21.272 05:34:03 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:21.272 05:34:03 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:21.272 05:34:03 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:21.272 05:34:03 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:21.272 /tmp/:spdk-test:key1 00:39:21.272 05:34:03 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=794041 00:39:21.272 05:34:03 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:21.272 05:34:03 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 794041 00:39:21.272 05:34:03 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 794041 ']' 00:39:21.272 05:34:03 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:21.272 05:34:03 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:21.272 05:34:03 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:21.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:21.272 05:34:03 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:21.272 05:34:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:21.272 [2024-12-09 05:34:03.686805] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:39:21.272 [2024-12-09 05:34:03.686869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid794041 ] 00:39:21.531 [2024-12-09 05:34:03.781364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:21.531 [2024-12-09 05:34:03.823744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:22.098 05:34:04 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:22.098 05:34:04 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:22.098 05:34:04 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:22.098 05:34:04 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.098 05:34:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:22.098 [2024-12-09 05:34:04.526164] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:22.098 null0 00:39:22.098 [2024-12-09 05:34:04.558230] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:22.098 [2024-12-09 05:34:04.558648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:22.357 05:34:04 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.357 05:34:04 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:22.357 46308882 00:39:22.357 05:34:04 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:22.357 991916649 00:39:22.357 05:34:04 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=794107 00:39:22.358 05:34:04 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:22.358 05:34:04 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 794107 /var/tmp/bperf.sock 00:39:22.358 05:34:04 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 794107 ']' 00:39:22.358 05:34:04 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:22.358 05:34:04 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:22.358 05:34:04 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:22.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:22.358 05:34:04 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:22.358 05:34:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:22.358 [2024-12-09 05:34:04.634839] Starting SPDK v25.01-pre git sha1 cabd61f7f / DPDK 24.03.0 initialization... 
00:39:22.358 [2024-12-09 05:34:04.634884] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid794107 ] 00:39:22.358 [2024-12-09 05:34:04.729212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:22.358 [2024-12-09 05:34:04.769584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:23.302 05:34:05 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:23.302 05:34:05 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:23.302 05:34:05 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:23.302 05:34:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:23.302 05:34:05 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:23.302 05:34:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:23.560 05:34:05 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:23.560 05:34:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:23.817 [2024-12-09 05:34:06.068115] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:23.817 nvme0n1 00:39:23.817 05:34:06 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:39:23.817 05:34:06 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:23.817 05:34:06 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:23.817 05:34:06 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:23.817 05:34:06 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:23.817 05:34:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:24.075 05:34:06 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:24.075 05:34:06 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:24.075 05:34:06 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:24.075 05:34:06 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:24.075 05:34:06 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:24.075 05:34:06 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:24.075 05:34:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:24.334 05:34:06 keyring_linux -- keyring/linux.sh@25 -- # sn=46308882 00:39:24.334 05:34:06 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:24.334 05:34:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:24.334 05:34:06 keyring_linux -- keyring/linux.sh@26 -- # [[ 46308882 == \4\6\3\0\8\8\8\2 ]] 00:39:24.334 05:34:06 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 46308882 00:39:24.334 05:34:06 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:24.334 05:34:06 keyring_linux -- 
keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:24.334 Running I/O for 1 seconds... 00:39:25.268 20273.00 IOPS, 79.19 MiB/s 00:39:25.268 Latency(us) 00:39:25.268 [2024-12-09T04:34:07.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:25.268 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:25.268 nvme0n1 : 1.01 20273.37 79.19 0.00 0.00 6291.72 5085.59 13631.49 00:39:25.268 [2024-12-09T04:34:07.738Z] =================================================================================================================== 00:39:25.268 [2024-12-09T04:34:07.738Z] Total : 20273.37 79.19 0.00 0.00 6291.72 5085.59 13631.49 00:39:25.268 { 00:39:25.268 "results": [ 00:39:25.268 { 00:39:25.268 "job": "nvme0n1", 00:39:25.268 "core_mask": "0x2", 00:39:25.268 "workload": "randread", 00:39:25.268 "status": "finished", 00:39:25.268 "queue_depth": 128, 00:39:25.268 "io_size": 4096, 00:39:25.268 "runtime": 1.006345, 00:39:25.268 "iops": 20273.365495928334, 00:39:25.268 "mibps": 79.19283396847005, 00:39:25.268 "io_failed": 0, 00:39:25.268 "io_timeout": 0, 00:39:25.268 "avg_latency_us": 6291.723900323497, 00:39:25.268 "min_latency_us": 5085.5936, 00:39:25.268 "max_latency_us": 13631.488 00:39:25.268 } 00:39:25.268 ], 00:39:25.268 "core_count": 1 00:39:25.268 } 00:39:25.268 05:34:07 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:25.268 05:34:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:25.527 05:34:07 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:25.527 05:34:07 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:25.527 05:34:07 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:25.527 05:34:07 keyring_linux -- 
keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:25.527 05:34:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:25.527 05:34:07 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:25.786 05:34:08 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:25.786 05:34:08 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:25.786 05:34:08 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:25.786 05:34:08 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:25.786 05:34:08 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:39:25.786 05:34:08 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:25.786 05:34:08 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:25.786 05:34:08 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:25.786 05:34:08 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:25.786 05:34:08 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:25.786 05:34:08 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:25.786 05:34:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:26.046 [2024-12-09 05:34:08.259007] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:26.047 [2024-12-09 05:34:08.259302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x790570 (107): Transport endpoint is not connected 00:39:26.047 [2024-12-09 05:34:08.260296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x790570 (9): Bad file descriptor 00:39:26.047 [2024-12-09 05:34:08.261298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:26.047 [2024-12-09 05:34:08.261312] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:26.047 [2024-12-09 05:34:08.261332] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:26.047 [2024-12-09 05:34:08.261348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:39:26.047 request: 00:39:26.047 { 00:39:26.047 "name": "nvme0", 00:39:26.047 "trtype": "tcp", 00:39:26.047 "traddr": "127.0.0.1", 00:39:26.047 "adrfam": "ipv4", 00:39:26.047 "trsvcid": "4420", 00:39:26.047 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:26.047 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:26.047 "prchk_reftag": false, 00:39:26.047 "prchk_guard": false, 00:39:26.047 "hdgst": false, 00:39:26.047 "ddgst": false, 00:39:26.047 "psk": ":spdk-test:key1", 00:39:26.047 "allow_unrecognized_csi": false, 00:39:26.047 "method": "bdev_nvme_attach_controller", 00:39:26.047 "req_id": 1 00:39:26.047 } 00:39:26.047 Got JSON-RPC error response 00:39:26.047 response: 00:39:26.047 { 00:39:26.047 "code": -5, 00:39:26.047 "message": "Input/output error" 00:39:26.047 } 00:39:26.047 05:34:08 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:39:26.047 05:34:08 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:26.047 05:34:08 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:26.047 05:34:08 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:26.047 05:34:08 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:39:26.047 05:34:08 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:26.047 05:34:08 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:39:26.047 05:34:08 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:39:26.047 05:34:08 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:39:26.047 05:34:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:26.047 05:34:08 keyring_linux -- keyring/linux.sh@33 -- # sn=46308882 00:39:26.047 05:34:08 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 46308882 00:39:26.047 1 links removed 00:39:26.047 05:34:08 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:26.047 05:34:08 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:39:26.047 
05:34:08 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:39:26.047 05:34:08 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:39:26.047 05:34:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:39:26.047 05:34:08 keyring_linux -- keyring/linux.sh@33 -- # sn=991916649 00:39:26.047 05:34:08 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 991916649 00:39:26.047 1 links removed 00:39:26.047 05:34:08 keyring_linux -- keyring/linux.sh@41 -- # killprocess 794107 00:39:26.047 05:34:08 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 794107 ']' 00:39:26.047 05:34:08 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 794107 00:39:26.047 05:34:08 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:39:26.047 05:34:08 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:26.047 05:34:08 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 794107 00:39:26.047 05:34:08 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:26.047 05:34:08 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:26.047 05:34:08 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 794107' 00:39:26.047 killing process with pid 794107 00:39:26.047 05:34:08 keyring_linux -- common/autotest_common.sh@973 -- # kill 794107 00:39:26.047 Received shutdown signal, test time was about 1.000000 seconds 00:39:26.047 00:39:26.047 Latency(us) 00:39:26.047 [2024-12-09T04:34:08.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:26.047 [2024-12-09T04:34:08.517Z] =================================================================================================================== 00:39:26.047 [2024-12-09T04:34:08.517Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:26.047 05:34:08 keyring_linux -- common/autotest_common.sh@978 -- # wait 794107 
00:39:26.307 05:34:08 keyring_linux -- keyring/linux.sh@42 -- # killprocess 794041 00:39:26.307 05:34:08 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 794041 ']' 00:39:26.307 05:34:08 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 794041 00:39:26.307 05:34:08 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:39:26.307 05:34:08 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:26.307 05:34:08 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 794041 00:39:26.307 05:34:08 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:26.307 05:34:08 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:26.307 05:34:08 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 794041' 00:39:26.307 killing process with pid 794041 00:39:26.307 05:34:08 keyring_linux -- common/autotest_common.sh@973 -- # kill 794041 00:39:26.307 05:34:08 keyring_linux -- common/autotest_common.sh@978 -- # wait 794041 00:39:26.567 00:39:26.567 real 0m5.654s 00:39:26.567 user 0m10.298s 00:39:26.567 sys 0m1.719s 00:39:26.567 05:34:08 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:26.567 05:34:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:26.567 ************************************ 00:39:26.567 END TEST keyring_linux 00:39:26.567 ************************************ 00:39:26.567 05:34:09 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:39:26.567 05:34:09 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:39:26.567 05:34:09 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:39:26.567 05:34:09 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:39:26.567 05:34:09 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:39:26.567 05:34:09 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:39:26.567 05:34:09 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:39:26.567 05:34:09 -- spdk/autotest.sh@346 -- # '[' 0 
-eq 1 ']' 00:39:26.567 05:34:09 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:39:26.567 05:34:09 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:39:26.567 05:34:09 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:39:26.567 05:34:09 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:39:26.567 05:34:09 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:39:26.567 05:34:09 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:39:26.567 05:34:09 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:39:26.567 05:34:09 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:39:26.567 05:34:09 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:39:26.567 05:34:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:26.567 05:34:09 -- common/autotest_common.sh@10 -- # set +x 00:39:26.567 05:34:09 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:39:26.567 05:34:09 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:39:26.567 05:34:09 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:39:26.567 05:34:09 -- common/autotest_common.sh@10 -- # set +x 00:39:33.156 INFO: APP EXITING 00:39:33.156 INFO: killing all VMs 00:39:33.156 INFO: killing vhost app 00:39:33.156 INFO: EXIT DONE 00:39:36.447 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:39:36.447 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:39:36.706 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:39:36.706 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:39:36.706 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:39:36.706 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:39:36.706 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:39:36.706 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:39:36.706 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:39:36.706 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:39:36.706 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:39:36.706 
0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:39:36.965 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:39:36.965 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:39:36.965 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:39:36.965 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:39:36.965 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:39:40.254 Cleaning 00:39:40.254 Removing: /var/run/dpdk/spdk0/config 00:39:40.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:40.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:40.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:40.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:40.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:40.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:40.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:40.255 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:40.255 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:40.255 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:40.255 Removing: /var/run/dpdk/spdk1/config 00:39:40.255 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:40.255 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:40.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:40.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:40.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:39:40.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:39:40.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:39:40.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:39:40.514 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:40.514 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:40.514 Removing: /var/run/dpdk/spdk2/config 00:39:40.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:40.514 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:39:40.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:39:40.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:39:40.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:39:40.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:39:40.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:39:40.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:39:40.514 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:39:40.514 Removing: /var/run/dpdk/spdk2/hugepage_info 00:39:40.514 Removing: /var/run/dpdk/spdk3/config 00:39:40.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:39:40.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:39:40.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:39:40.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:39:40.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:39:40.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:39:40.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:39:40.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:39:40.514 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:39:40.514 Removing: /var/run/dpdk/spdk3/hugepage_info 00:39:40.514 Removing: /var/run/dpdk/spdk4/config 00:39:40.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:39:40.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:39:40.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:39:40.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:39:40.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:39:40.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:39:40.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:39:40.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:39:40.514 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:39:40.514 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:39:40.514 Removing: /dev/shm/bdev_svc_trace.1 00:39:40.514 Removing: /dev/shm/nvmf_trace.0 00:39:40.514 Removing: /dev/shm/spdk_tgt_trace.pid273916 00:39:40.514 Removing: /var/run/dpdk/spdk0 00:39:40.514 Removing: /var/run/dpdk/spdk1 00:39:40.514 Removing: /var/run/dpdk/spdk2 00:39:40.514 Removing: /var/run/dpdk/spdk3 00:39:40.514 Removing: /var/run/dpdk/spdk4 00:39:40.514 Removing: /var/run/dpdk/spdk_pid271222 00:39:40.514 Removing: /var/run/dpdk/spdk_pid272494 00:39:40.514 Removing: /var/run/dpdk/spdk_pid273916 00:39:40.514 Removing: /var/run/dpdk/spdk_pid274525 00:39:40.773 Removing: /var/run/dpdk/spdk_pid275528 00:39:40.773 Removing: /var/run/dpdk/spdk_pid275800 00:39:40.773 Removing: /var/run/dpdk/spdk_pid276915 00:39:40.773 Removing: /var/run/dpdk/spdk_pid276933 00:39:40.773 Removing: /var/run/dpdk/spdk_pid277325 00:39:40.773 Removing: /var/run/dpdk/spdk_pid279057 00:39:40.773 Removing: /var/run/dpdk/spdk_pid280520 00:39:40.773 Removing: /var/run/dpdk/spdk_pid280893 00:39:40.773 Removing: /var/run/dpdk/spdk_pid281333 00:39:40.773 Removing: /var/run/dpdk/spdk_pid281779 00:39:40.773 Removing: /var/run/dpdk/spdk_pid282106 00:39:40.773 Removing: /var/run/dpdk/spdk_pid282391 00:39:40.773 Removing: /var/run/dpdk/spdk_pid282564 00:39:40.773 Removing: /var/run/dpdk/spdk_pid282885 00:39:40.773 Removing: /var/run/dpdk/spdk_pid283859 00:39:40.773 Removing: /var/run/dpdk/spdk_pid287581 00:39:40.773 Removing: /var/run/dpdk/spdk_pid287890 00:39:40.773 Removing: /var/run/dpdk/spdk_pid288195 00:39:40.773 Removing: /var/run/dpdk/spdk_pid288452 00:39:40.773 Removing: /var/run/dpdk/spdk_pid289014 00:39:40.773 Removing: /var/run/dpdk/spdk_pid289035 00:39:40.773 Removing: /var/run/dpdk/spdk_pid289600 00:39:40.773 Removing: /var/run/dpdk/spdk_pid289859 00:39:40.773 Removing: /var/run/dpdk/spdk_pid290163 00:39:40.773 Removing: /var/run/dpdk/spdk_pid290168 00:39:40.773 Removing: /var/run/dpdk/spdk_pid290458 00:39:40.773 Removing: /var/run/dpdk/spdk_pid290502 00:39:40.773 
Removing: /var/run/dpdk/spdk_pid291114 00:39:40.773 Removing: /var/run/dpdk/spdk_pid291401 00:39:40.773 Removing: /var/run/dpdk/spdk_pid291735 00:39:40.773 Removing: /var/run/dpdk/spdk_pid295881 00:39:40.773 Removing: /var/run/dpdk/spdk_pid300716 00:39:40.773 Removing: /var/run/dpdk/spdk_pid311390 00:39:40.773 Removing: /var/run/dpdk/spdk_pid311945 00:39:40.773 Removing: /var/run/dpdk/spdk_pid316640 00:39:40.773 Removing: /var/run/dpdk/spdk_pid317044 00:39:40.773 Removing: /var/run/dpdk/spdk_pid321601 00:39:40.773 Removing: /var/run/dpdk/spdk_pid328020 00:39:40.773 Removing: /var/run/dpdk/spdk_pid331078 00:39:40.773 Removing: /var/run/dpdk/spdk_pid342635 00:39:40.773 Removing: /var/run/dpdk/spdk_pid352457 00:39:40.773 Removing: /var/run/dpdk/spdk_pid354318 00:39:40.773 Removing: /var/run/dpdk/spdk_pid355145 00:39:40.773 Removing: /var/run/dpdk/spdk_pid373570 00:39:40.773 Removing: /var/run/dpdk/spdk_pid378004 00:39:40.773 Removing: /var/run/dpdk/spdk_pid427718 00:39:40.773 Removing: /var/run/dpdk/spdk_pid434024 00:39:40.773 Removing: /var/run/dpdk/spdk_pid440157 00:39:40.773 Removing: /var/run/dpdk/spdk_pid447324 00:39:40.773 Removing: /var/run/dpdk/spdk_pid447439 00:39:40.773 Removing: /var/run/dpdk/spdk_pid448319 00:39:40.773 Removing: /var/run/dpdk/spdk_pid449161 00:39:40.773 Removing: /var/run/dpdk/spdk_pid450170 00:39:40.773 Removing: /var/run/dpdk/spdk_pid450712 00:39:40.773 Removing: /var/run/dpdk/spdk_pid450727 00:39:40.773 Removing: /var/run/dpdk/spdk_pid451000 00:39:41.032 Removing: /var/run/dpdk/spdk_pid451247 00:39:41.032 Removing: /var/run/dpdk/spdk_pid451258 00:39:41.032 Removing: /var/run/dpdk/spdk_pid452212 00:39:41.032 Removing: /var/run/dpdk/spdk_pid453109 00:39:41.032 Removing: /var/run/dpdk/spdk_pid454043 00:39:41.032 Removing: /var/run/dpdk/spdk_pid454719 00:39:41.032 Removing: /var/run/dpdk/spdk_pid454721 00:39:41.032 Removing: /var/run/dpdk/spdk_pid454996 00:39:41.032 Removing: /var/run/dpdk/spdk_pid456233 00:39:41.032 Removing: 
/var/run/dpdk/spdk_pid457498
00:39:41.032 Removing: /var/run/dpdk/spdk_pid466261
00:39:41.032 Removing: /var/run/dpdk/spdk_pid496671
00:39:41.032 Removing: /var/run/dpdk/spdk_pid501517
00:39:41.032 Removing: /var/run/dpdk/spdk_pid503349
00:39:41.032 Removing: /var/run/dpdk/spdk_pid505335
00:39:41.032 Removing: /var/run/dpdk/spdk_pid505609
00:39:41.032 Removing: /var/run/dpdk/spdk_pid506006
00:39:41.032 Removing: /var/run/dpdk/spdk_pid506569
00:39:41.033 Removing: /var/run/dpdk/spdk_pid507203
00:39:41.033 Removing: /var/run/dpdk/spdk_pid509106
00:39:41.033 Removing: /var/run/dpdk/spdk_pid510176
00:39:41.033 Removing: /var/run/dpdk/spdk_pid510767
00:39:41.033 Removing: /var/run/dpdk/spdk_pid513173
00:39:41.033 Removing: /var/run/dpdk/spdk_pid513984
00:39:41.033 Removing: /var/run/dpdk/spdk_pid514623
00:39:41.033 Removing: /var/run/dpdk/spdk_pid519173
00:39:41.033 Removing: /var/run/dpdk/spdk_pid525084
00:39:41.033 Removing: /var/run/dpdk/spdk_pid525086
00:39:41.033 Removing: /var/run/dpdk/spdk_pid525087
00:39:41.033 Removing: /var/run/dpdk/spdk_pid529283
00:39:41.033 Removing: /var/run/dpdk/spdk_pid538576
00:39:41.033 Removing: /var/run/dpdk/spdk_pid542833
00:39:41.033 Removing: /var/run/dpdk/spdk_pid549378
00:39:41.033 Removing: /var/run/dpdk/spdk_pid551368
00:39:41.033 Removing: /var/run/dpdk/spdk_pid552924
00:39:41.033 Removing: /var/run/dpdk/spdk_pid554424
00:39:41.033 Removing: /var/run/dpdk/spdk_pid559410
00:39:41.033 Removing: /var/run/dpdk/spdk_pid564122
00:39:41.033 Removing: /var/run/dpdk/spdk_pid568662
00:39:41.033 Removing: /var/run/dpdk/spdk_pid576694
00:39:41.033 Removing: /var/run/dpdk/spdk_pid576832
00:39:41.033 Removing: /var/run/dpdk/spdk_pid581799
00:39:41.033 Removing: /var/run/dpdk/spdk_pid582064
00:39:41.033 Removing: /var/run/dpdk/spdk_pid582296
00:39:41.033 Removing: /var/run/dpdk/spdk_pid582814
00:39:41.033 Removing: /var/run/dpdk/spdk_pid582831
00:39:41.033 Removing: /var/run/dpdk/spdk_pid587810
00:39:41.033 Removing: /var/run/dpdk/spdk_pid588307
00:39:41.033 Removing: /var/run/dpdk/spdk_pid593204
00:39:41.033 Removing: /var/run/dpdk/spdk_pid596014
00:39:41.033 Removing: /var/run/dpdk/spdk_pid602408
00:39:41.033 Removing: /var/run/dpdk/spdk_pid608328
00:39:41.033 Removing: /var/run/dpdk/spdk_pid617569
00:39:41.033 Removing: /var/run/dpdk/spdk_pid625367
00:39:41.033 Removing: /var/run/dpdk/spdk_pid625372
00:39:41.033 Removing: /var/run/dpdk/spdk_pid646077
00:39:41.033 Removing: /var/run/dpdk/spdk_pid646957
00:39:41.033 Removing: /var/run/dpdk/spdk_pid647601
00:39:41.292 Removing: /var/run/dpdk/spdk_pid648142
00:39:41.292 Removing: /var/run/dpdk/spdk_pid648998
00:39:41.292 Removing: /var/run/dpdk/spdk_pid649752
00:39:41.292 Removing: /var/run/dpdk/spdk_pid650344
00:39:41.292 Removing: /var/run/dpdk/spdk_pid651046
00:39:41.292 Removing: /var/run/dpdk/spdk_pid655701
00:39:41.292 Removing: /var/run/dpdk/spdk_pid655967
00:39:41.292 Removing: /var/run/dpdk/spdk_pid662305
00:39:41.292 Removing: /var/run/dpdk/spdk_pid662481
00:39:41.292 Removing: /var/run/dpdk/spdk_pid668349
00:39:41.292 Removing: /var/run/dpdk/spdk_pid672782
00:39:41.292 Removing: /var/run/dpdk/spdk_pid683111
00:39:41.292 Removing: /var/run/dpdk/spdk_pid683740
00:39:41.292 Removing: /var/run/dpdk/spdk_pid688137
00:39:41.292 Removing: /var/run/dpdk/spdk_pid688566
00:39:41.292 Removing: /var/run/dpdk/spdk_pid693628
00:39:41.292 Removing: /var/run/dpdk/spdk_pid699613
00:39:41.292 Removing: /var/run/dpdk/spdk_pid702446
00:39:41.292 Removing: /var/run/dpdk/spdk_pid713162
00:39:41.292 Removing: /var/run/dpdk/spdk_pid722544
00:39:41.292 Removing: /var/run/dpdk/spdk_pid724387
00:39:41.292 Removing: /var/run/dpdk/spdk_pid725190
00:39:41.292 Removing: /var/run/dpdk/spdk_pid743223
00:39:41.292 Removing: /var/run/dpdk/spdk_pid747363
00:39:41.292 Removing: /var/run/dpdk/spdk_pid750211
00:39:41.292 Removing: /var/run/dpdk/spdk_pid758730
00:39:41.292 Removing: /var/run/dpdk/spdk_pid758861
00:39:41.292 Removing: /var/run/dpdk/spdk_pid764467
00:39:41.292 Removing: /var/run/dpdk/spdk_pid766464
00:39:41.292 Removing: /var/run/dpdk/spdk_pid768505
00:39:41.292 Removing: /var/run/dpdk/spdk_pid769646
00:39:41.292 Removing: /var/run/dpdk/spdk_pid771723
00:39:41.292 Removing: /var/run/dpdk/spdk_pid772941
00:39:41.292 Removing: /var/run/dpdk/spdk_pid782941
00:39:41.292 Removing: /var/run/dpdk/spdk_pid783478
00:39:41.292 Removing: /var/run/dpdk/spdk_pid784013
00:39:41.292 Removing: /var/run/dpdk/spdk_pid786486
00:39:41.292 Removing: /var/run/dpdk/spdk_pid787017
00:39:41.292 Removing: /var/run/dpdk/spdk_pid787551
00:39:41.292 Removing: /var/run/dpdk/spdk_pid791674
00:39:41.292 Removing: /var/run/dpdk/spdk_pid791754
00:39:41.292 Removing: /var/run/dpdk/spdk_pid793422
00:39:41.292 Removing: /var/run/dpdk/spdk_pid794041
00:39:41.292 Removing: /var/run/dpdk/spdk_pid794107
00:39:41.292 Clean
00:39:41.551 05:34:23 -- common/autotest_common.sh@1453 -- # return 0
00:39:41.551 05:34:23 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:39:41.551 05:34:23 -- common/autotest_common.sh@732 -- # xtrace_disable
00:39:41.551 05:34:23 -- common/autotest_common.sh@10 -- # set +x
00:39:41.551 05:34:23 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:39:41.551 05:34:23 -- common/autotest_common.sh@732 -- # xtrace_disable
00:39:41.551 05:34:23 -- common/autotest_common.sh@10 -- # set +x
00:39:41.551 05:34:23 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:41.551 05:34:23 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:39:41.551 05:34:23 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:39:41.551 05:34:23 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:39:41.551 05:34:23 -- spdk/autotest.sh@398 -- # hostname
00:39:41.551 05:34:23 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-20 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:39:41.809 geninfo: WARNING: invalid characters removed from testname!
00:40:03.746 05:34:45 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:05.650 05:34:47 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:07.557 05:34:49 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:09.103 05:34:51 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:11.081 05:34:53 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:12.459 05:34:54 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:14.364 05:34:56 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:40:14.364 05:34:56 -- spdk/autorun.sh@1 -- $ timing_finish
00:40:14.364 05:34:56 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:40:14.364 05:34:56 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:40:14.364 05:34:56 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:40:14.364 05:34:56 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:40:14.364 + [[ -n 191064 ]]
00:40:14.364 + sudo kill 191064
00:40:14.373 [Pipeline] }
00:40:14.388 [Pipeline] // stage
00:40:14.393 [Pipeline] }
00:40:14.403 [Pipeline] // timeout
00:40:14.408 [Pipeline] }
00:40:14.418 [Pipeline] // catchError
00:40:14.422 [Pipeline] }
00:40:14.432 [Pipeline] // wrap
00:40:14.438 [Pipeline] }
00:40:14.447 [Pipeline] // catchError
00:40:14.455 [Pipeline] stage
00:40:14.457 [Pipeline] { (Epilogue)
00:40:14.469 [Pipeline] catchError
00:40:14.471 [Pipeline] {
00:40:14.482 [Pipeline] echo
00:40:14.483 Cleanup processes
00:40:14.490 [Pipeline] sh
00:40:14.777 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:14.777 809581 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:14.789 [Pipeline] sh
00:40:15.074 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:15.074 ++ grep -v 'sudo pgrep'
00:40:15.074 ++ awk '{print $1}'
00:40:15.074 + sudo kill -9
00:40:15.074 + true
00:40:15.085 [Pipeline] sh
00:40:15.371 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:40:15.371 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:40:21.939 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:40:26.153 [Pipeline] sh
00:40:26.440 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:40:26.440 Artifacts sizes are good
00:40:26.456 [Pipeline] archiveArtifacts
00:40:26.464 Archiving artifacts
00:40:26.595 [Pipeline] sh
00:40:26.883 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:40:26.898 [Pipeline] cleanWs
00:40:26.908 [WS-CLEANUP] Deleting project workspace...
00:40:26.908 [WS-CLEANUP] Deferred wipeout is used...
00:40:26.915 [WS-CLEANUP] done
00:40:26.917 [Pipeline] }
00:40:26.934 [Pipeline] // catchError
00:40:26.946 [Pipeline] sh
00:40:27.234 + logger -p user.info -t JENKINS-CI
00:40:27.244 [Pipeline] }
00:40:27.258 [Pipeline] // stage
00:40:27.263 [Pipeline] }
00:40:27.278 [Pipeline] // node
00:40:27.283 [Pipeline] End of Pipeline
00:40:27.322 Finished: SUCCESS